From: Francisco Reyes <lists(at)stringsutils(dot)com>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: Pgsql performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Vacuums on large busy databases
Date: 2006-09-14 23:30:52
Message-ID: cone.1158276652.123899.75607.5001@35st-server.simplicato.com
Lists: pgsql-performance
Jeff Davis writes:
>> shared_buffers = 10000
>
> Why so low?
My initial research was not thorough enough with regards to how to compute
how many to use.
> You have a lot of memory, and shared_buffers are an
> important performance setting. I have a machine with 4GB of RAM, and I
> found my best performance was around 150000 shared buffers, which is a
> little more than 1GB.
Going to make it 256,000 (2GB).
> on a dedicated system, particularly with versions 8.1 and later.
Was reading that. It seems that around 1/4 of real memory is a good
starting point.
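Roughly what I plan to put in postgresql.conf (assuming the default 8kB
block size, so 256,000 buffers come out to about 2GB):

    shared_buffers = 256000        # 256000 * 8kB buffers, roughly 2GB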
> Also, a VACUUM helps a table that gets UPDATEs and DELETEs. If you're
> doing mostly inserts on a big table, there may be no need to VACUUM it 3
> times per day. Try VACUUMing the tables that get more UPDATEs and
> DELETEs more often, and if a table has few UPDATEs/DELETEs, VACUUM it
> only occasionally.
Will have to talk to the developers. In particular, for every insert there
are updates. I know they have at least one table that gets updated to hold
summarized totals.
One of the reasons I was doing the vacuumdb of the entire DB was to get the
number of shared-buffers. Now that I have an idea of how much I need, I will
likely do something along the lines of what you suggest: one full run over
everything at night, and during the day just the tables that get updated
more. I also set more aggressive values on autovacuum, so that should help
some too.
> You can run ANALYZE more frequently on all the
> tables, because it does not have to read the entire table and doesn't
> interfere with the rest of the operations.
On a related question: right now I have my autovacuum settings as:
autovacuum_vacuum_threshold = 50000
autovacuum_analyze_threshold = 100000
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.1
Based on what you described above, I could set my analyze values to the
same as the vacuum ones, to have something like:
autovacuum_vacuum_threshold = 50000
autovacuum_analyze_threshold = 50000
autovacuum_vacuum_scale_factor = 0.05
autovacuum_analyze_scale_factor = 0.05
For DBs with hundreds of GBs, would it be better to take
autovacuum_analyze_scale_factor down to even 0.01? The permanent DB is over
200GB and growing; the 100GB ones are staging. By the time we have finished
migrating all the data from the old system it will be at least 300GB, and
0.01 of that is still 3GB, which is pretty sizable.
Do the thresholds take precedence over the scale factors? Or is it basically
that if either one of them gets hit, the action will take place?
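My current reading of the 8.1 docs, for what it's worth (the 10-million-row
table below is just a made-up example):

    analyze threshold = autovacuum_analyze_threshold
                      + autovacuum_analyze_scale_factor * reltuples

    For example, a 10,000,000 row table with threshold 50000 and scale
    factor 0.05: 50000 + 0.05 * 10,000,000 = 550,000 changed rows before
    an ANALYZE kicks in.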