From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: John Rouillard <rouilj(at)renesys(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Very poor performance loading 100M of sql data using copy
Date: 2008-04-29 15:58:00
Message-ID: Pine.GSO.4.64.0804291149450.8414@westnet.com
Lists: pgsql-performance
On Tue, 29 Apr 2008, John Rouillard wrote:
> So swap the memory usage from the OS cache to the postgresql process.
> Using 1/4 as a guideline it sounds like 600,000 (approx 4GB) is a
> better setting. So I'll try 300000 to start (1/8 of memory) and see
> what it does to the other processes on the box.
That is potentially a good setting. Just be warned that when you do hit a
checkpoint with a high setting here, you can end up with a lot of data in
memory that needs to be written out, and under 8.2 that can cause an ugly
spike in disk writes. The reason I usually threw out 30,000 as a
suggested starting figure is that most caching disk controllers can buffer
at least 256MB of writes to keep that situation from getting too bad.
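As a rough illustration of the kind of 8.2-era settings involved, a minimal
postgresql.conf sketch might look like the following. The values here are
illustrative assumptions for this discussion, not recommendations from the
thread:

```
# Hypothetical 8.2-era sketch; values are illustrative only
shared_buffers = 30000          # ~234MB at the default 8kB block size
checkpoint_segments = 30        # allow more WAL between checkpoints
checkpoint_timeout = 5min
bgwriter_all_percent = 5        # trickle dirty buffers out between checkpoints
bgwriter_all_maxpages = 600
```

Spreading writes out via the background writer is one way to soften the
checkpoint spike described above when shared_buffers is raised.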
Try it out and see what happens; just be warned that checkpoint spikes are
the possible downside of setting shared_buffers too high, so you might want
to ease into that setting more gradually (particularly if this system is
shared with other apps).
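For reference, the shared_buffers figures in this thread are counts of 8kB
buffers (the default block size), so the numbers translate to memory sizes
roughly as in this small sanity-check sketch (the helper name is made up
for illustration):

```python
# Convert a shared_buffers setting (a count of 8kB pages, as in 8.2-style
# configs) into MiB, to sanity-check the figures discussed in this thread.
BLOCK_SIZE = 8192  # default PostgreSQL block size in bytes


def buffers_to_mb(n_buffers: int) -> float:
    """Approximate size in MiB of n_buffers 8kB buffers."""
    return n_buffers * BLOCK_SIZE / (1024 * 1024)


for setting in (30000, 300000, 600000):
    print(f"shared_buffers = {setting:>6} -> ~{buffers_to_mb(setting):,.0f} MiB")
```

By this arithmetic, 30,000 buffers is about 234MB (in line with the 256MB
controller cache mentioned above), 300,000 is about 2.3GB, and 600,000 is
closer to 4.6GB than 4GB.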
--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD