From: didier <did447(at)gmail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Mitsumasa KONDO <kondo(dot)mitsumasa(at)gmail(dot)com>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: posix_fadvise() and pg_receivexlog
Date: 2014-09-09 12:07:55
Message-ID: CAJRYxu+bJRXO7D1MtixfiRkXzpQG7gGqGUyigj1x6SQhpRM-VQ@mail.gmail.com
Lists: pgsql-hackers
Hi,
> Well, I'd like to hear someone from the field complaining that
> pg_receivexlog is thrashing the cache and thus reducing the performance of
> some other process. Or a least a synthetic test case that demonstrates that
> happening.
It's not with pg_receivexlog, but it's related.
On a small box, performance was good enough without a replication
server connected, but not with one: there was 1 GB worth of WAL
sitting in RAM, versus next to nothing without the slave!
Setup:
- 8 GB RAM
- 2 GB shared_buffers (smaller values have other issues)
- checkpoint_segments = 40 (smaller values trigger too many xlog checkpoints)
- checkpoints spread over 10 min, writing 30 to 50% of shared buffers
- live data set fits in RAM
- constant load
On application startup (1 or 2 times per hour), requests hit cold
data, and those reads were now saturating IO.
I'm not sure it's an OS bug, as the WAL was 'hotter' than the cold data.
A cron task running vmtouch -e every minute to evict old WAL files
from memory has solved the issue.
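For what it's worth, the same eviction can be done without vmtouch by
calling posix_fadvise(POSIX_FADV_DONTNEED) on the old segments
directly. A minimal sketch, assuming Linux and Python 3; the pg_xlog
path in the usage comment is of course installation-specific:

```python
import os


def evict_from_page_cache(path: str) -> None:
    """Best-effort: ask the kernel to drop cached pages for one file.

    Dirty pages cannot be dropped, so flush first; the advice itself is
    only a hint and may be ignored by the kernel.
    """
    fd = os.open(path, os.O_RDONLY)
    try:
        os.fsync(fd)  # write back any dirty pages so they are droppable
        os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    finally:
        os.close(fd)


# Example (hypothetical paths): evict every WAL segment in a directory.
# for name in os.listdir("/var/lib/postgresql/9.3/main/pg_xlog"):
#     evict_from_page_cache(os.path.join(".../pg_xlog", name))
```

Offset 0 with length 0 covers the whole file, so one call per segment
is enough; the effect is roughly what vmtouch -e does per file.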
Regards