From: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>
To: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: autovacuum maintenance_work_mem
Date: 2010-11-16 16:12:53
Message-ID: 1289923546-sup-4272@alvh.no-ip.org
Magnus was just talking to me about having a better way of controlling
memory usage in autovacuum. Instead of each worker using up to
maintenance_work_mem, which ends up as a disaster when DBA A sets it to
a large value and DBA B raises autovacuum_max_workers, we could simply
have an "autovacuum_maintenance_memory" setting (name TBD) that defines
the maximum amount of memory autovacuum as a whole may use, regardless
of the number of workers.
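To illustrate, a hypothetical postgresql.conf snippet
(autovacuum_maintenance_memory is the setting proposed here, not an
existing one):

    autovacuum_maintenance_memory = 256MB  # proposed: total budget for all workers
    autovacuum_max_workers = 4             # existing setting

With these values, each worker would be limited to 64MB no matter how
many are actually running.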
So for the initial implementation, we could just have each worker set
its local maintenance_work_mem to
autovacuum_maintenance_memory / autovacuum_max_workers. That way total
autovacuum memory usage can never exceed the budget.
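A rough sketch of what the worker-side code might look like, run at
worker startup. autovacuum_maintenance_memory stands for the proposed
(not yet existing) GUC variable, assumed here to be an int in kB;
autovacuum_max_workers and SetConfigOption() are the existing ones:

    /*
     * Sketch only: cap this worker's maintenance_work_mem so that all
     * workers together stay within the proposed budget.
     */
    if (autovacuum_maintenance_memory > 0)
    {
        char        buf[32];

        /* split the total budget evenly across the maximum worker count */
        snprintf(buf, sizeof(buf), "%d",
                 autovacuum_maintenance_memory / autovacuum_max_workers);
        SetConfigOption("maintenance_work_mem", buf,
                        PGC_SUSET, PGC_S_OVERRIDE);
    }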
This implementation is not ideal: most of the time fewer workers than
the maximum are running, so each one would be capped well below the
memory it could safely use, and vacuums could be slower. But I think
it's better than what we currently have.
Thoughts?
(A future implementation could improve things by using something like
the balancing code we have for cost_delay. But I don't want to go there
now.)
--
Álvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>