From: | "Steven Flatt" <steven(dot)flatt(at)gmail(dot)com> |
---|---|
To: | "Jim C(dot) Nasby" <decibel(at)decibel(dot)org> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Vacuum looping? |
Date: | 2007-07-30 16:04:08 |
Message-ID: | 357fa7590707300904u4c2eb219p87793b5d19730638@mail.gmail.com |
Lists: pgsql-performance
On 7/28/07, Jim C. Nasby <decibel(at)decibel(dot)org> wrote:
>
> What are your vacuum_cost_* settings? If you set those too aggressively
> you'll be in big trouble.
autovacuum_vacuum_cost_delay = 100
autovacuum_vacuum_cost_limit = 200
These are generally fine: autovacuum keeps up, and there is minimal impact
on the system.
vacuum_cost_delay = 100
vacuum_cost_limit = 1000
We set this cost_limit a little higher so that, in the few cases where we
have to intervene manually, vacuum runs faster.
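For reference, the relevant postgresql.conf lines look roughly like this
(same values as above; the comments are just my shorthand):

    autovacuum_vacuum_cost_delay = 100   # ms; throttles autovacuum workers
    autovacuum_vacuum_cost_limit = 200
    vacuum_cost_delay = 100              # ms; applies to manual VACUUM
    vacuum_cost_limit = 1000             # higher so manual runs finish faster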
> The second pass on the vacuum means that maintenance_work_memory isn't
> large enough.
maintenance_work_mem is set to 256MB, and I don't think we want to make it
any bigger by default. As I said above, autovacuum generally runs fine.
If we do run into this situation again (lots of OOM queries and lots to
clean up), we'll probably increase maintenance_work_mem locally and run a
vacuum in that session.
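Roughly like this, in a single psql session (the memory value and table name
here are just placeholders, not recommendations):

    -- one-off manual cleanup; SET only affects this session
    -- and reverts when the session ends
    SET maintenance_work_mem = '1GB';     -- placeholder value
    VACUUM VERBOSE some_bloated_table;    -- placeholder table name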
Good to know that vacuum was doing the right thing.
Thanks,
Steve