From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Peter Geoghegan <pg(at)heroku(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Magnus Hagander <magnus(at)hagander(dot)net>
Subject: Re: autovacuum_work_mem
Date: 2013-12-11 14:43:38
Message-ID: CA+U5nMJiTwk=u0_2AVt+3SgkQ065CRKoqFuiLKUkRFeXasaNYg@mail.gmail.com
Lists: pgsql-hackers
On 25 November 2013 21:51, Peter Geoghegan <pg(at)heroku(dot)com> wrote:
> On Sun, Nov 24, 2013 at 9:06 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>> VACUUM uses 6 bytes per dead tuple. And autovacuum regularly removes
>> dead tuples, limiting their numbers.
>>
>> In what circumstances will the memory usage from multiple concurrent
>> VACUUMs become a problem? In those circumstances, reducing
>> autovacuum_work_mem will cause more passes through indexes, dirtying
>> more pages and elongating the problem workload.
>
> Yes, of course, but if we presume that the memory for autovacuum
> workers to do everything in one pass simply isn't there, it's still
> better to do multiple passes.
That isn't clear to me. It seems better to wait until we have the memory.

My feeling is that this parameter is a fairly blunt approach to the
problems of memory pressure on autovacuum and other maintenance
tasks. I am worried that it will not effectively solve the problem. I
don't wish to block the patch; I wish to get to an effective solution
to the problem.
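For illustration, the trade-off can be put in numbers: each dead-tuple pointer costs 6 bytes (per the figure quoted above), so a given work_mem budget bounds how many dead tuples one vacuum pass can collect, and any shortfall translates directly into extra scans of every index. A rough back-of-the-envelope sketch; the helper names are hypothetical, not PostgreSQL internals:

```python
# Rough sketch of the memory vs. index-pass trade-off discussed above.
# Assumes 6 bytes per dead-tuple pointer, as stated in the thread;
# the function names are illustrative, not PostgreSQL internals.
import math

BYTES_PER_DEAD_TUPLE = 6

def max_dead_tuples(work_mem_bytes):
    """Dead tuples that fit in one vacuum pass for a given budget."""
    return work_mem_bytes // BYTES_PER_DEAD_TUPLE

def index_scan_passes(total_dead_tuples, work_mem_bytes):
    """Each full batch of dead tuples forces one scan of every index."""
    per_pass = max_dead_tuples(work_mem_bytes)
    return max(1, math.ceil(total_dead_tuples / per_pass))

# A 64 MB budget holds ~11.2 million dead-tuple pointers per pass:
budget = 64 * 1024 * 1024
print(max_dead_tuples(budget))                # 11184810
# 50 million dead tuples then need 5 index-scan passes:
print(index_scan_passes(50_000_000, budget))  # 5
```

Halving the budget roughly doubles the number of index passes, which is the page-dirtying cost referred to above.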
A better approach to handling memory pressure would be to globally
coordinate workers so that we don't oversubscribe memory, allocating
memory from a global pool.
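The global-pool idea might be sketched like this; a toy model in Python, not PostgreSQL code, where workers reserve from a shared budget and block rather than oversubscribe:

```python
# Illustrative sketch of globally coordinated memory allocation:
# workers reserve from a shared pool and wait when it is exhausted,
# so concurrent vacuums never oversubscribe the configured budget.
# A toy model only; class and method names are hypothetical.
import threading

class VacuumMemoryPool:
    def __init__(self, total_bytes):
        self.available = total_bytes
        self.cond = threading.Condition()

    def acquire(self, nbytes):
        """Block until nbytes can be reserved from the global pool."""
        with self.cond:
            while self.available < nbytes:
                self.cond.wait()
            self.available -= nbytes

    def release(self, nbytes):
        """Return a worker's reservation and wake any waiters."""
        with self.cond:
            self.available += nbytes
            self.cond.notify_all()

pool = VacuumMemoryPool(total_bytes=256 * 1024 * 1024)
pool.acquire(64 * 1024 * 1024)    # worker 1 reserves 64 MB
pool.acquire(128 * 1024 * 1024)   # worker 2 reserves 128 MB
print(pool.available)             # 67108864 bytes still free
pool.release(64 * 1024 * 1024)    # worker 1 finishes
```

A third worker asking for more than the remaining budget would simply wait instead of forcing everyone into smaller, multi-pass vacuums.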
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services