From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Jim Nasby <jim(at)nasby(dot)net>
Cc: Christopher Browne <cbbrowne(at)gmail(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: 9.3 feature proposal: vacuumdb -j #
Date: 2012-01-18 01:24:59
Message-ID: 4F161F6B.9020808@dunslane.net
Lists: pgsql-hackers
On 01/17/2012 07:09 PM, Jim Nasby wrote:
> On Jan 13, 2012, at 4:15 PM, Christopher Browne wrote:
>> Have two logical tasks:
>> a) A process that manages the list, and
>> b) Child processes doing vacuums.
>>
>> Each time a child completes a table, it asks the parent for another one.
> There is also a middle ground, because having a scheduling process sounds like a lot more work than what Josh was proposing.
>
> CREATE TEMP SEQUENCE s;
> SELECT relname, nextval('s') % <number of backends> AS backend_number
> FROM ( SELECT relname
>        FROM pg_class
>        ORDER BY relpages
>      ) t;
>
> Of course, having an actual scheduling process is most likely the most efficient.
We already have a model for this in parallel pg_restore. It would
probably not be terribly hard to adapt to parallel vacuum.
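For illustration only, here is a rough sketch of that dispatch model: one parent driving a handful of nonblocking libpq connections and handing out the next table whenever a worker finishes. This is not pg_restore's actual code (pg_restore runs separate worker processes/threads); the worker count, connection string, and table list below are made up, and real code would build the list from pg_class, quote identifiers with PQescapeIdentifier, and do proper error handling.

#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

#define NWORKERS 4                      /* hypothetical worker count */

/* hypothetical table list; real code would build it from pg_class */
static const char *tables[] = {"t_big", "t_medium", "t_small"};
static const int ntables = sizeof(tables) / sizeof(tables[0]);

/* hand the next table to an idle worker; returns updated list position */
static int
dispatch(PGconn *conn, int next)
{
    char        sql[256];

    snprintf(sql, sizeof(sql), "VACUUM %s", tables[next]);
    if (PQsendQuery(conn, sql) != 1)
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
    return next + 1;
}

int
main(void)
{
    PGconn     *workers[NWORKERS];
    int         busy[NWORKERS] = {0};
    int         next = 0, running = 0, i;

    for (i = 0; i < NWORKERS; i++)
    {
        /* hypothetical conninfo */
        workers[i] = PQconnectdb("dbname=postgres");
        if (PQstatus(workers[i]) != CONNECTION_OK)
        {
            fprintf(stderr, "%s", PQerrorMessage(workers[i]));
            return 1;
        }
        if (next < ntables)             /* seed each worker with one table */
        {
            next = dispatch(workers[i], next);
            busy[i] = 1;
            running++;
        }
    }

    /* parent loop: whenever a worker finishes, give it the next table */
    while (running > 0)
    {
        fd_set      rfds;
        int         maxfd = -1;

        FD_ZERO(&rfds);
        for (i = 0; i < NWORKERS; i++)
            if (busy[i])
            {
                FD_SET(PQsocket(workers[i]), &rfds);
                if (PQsocket(workers[i]) > maxfd)
                    maxfd = PQsocket(workers[i]);
            }
        if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
            break;

        for (i = 0; i < NWORKERS; i++)
        {
            PGresult   *res;

            if (!busy[i] || !FD_ISSET(PQsocket(workers[i]), &rfds))
                continue;
            PQconsumeInput(workers[i]);
            if (PQisBusy(workers[i]))
                continue;               /* VACUUM still running */
            while ((res = PQgetResult(workers[i])) != NULL)
                PQclear(res);           /* drain the completed command */
            busy[i] = 0;
            running--;
            if (next < ntables)
            {
                next = dispatch(workers[i], next);
                busy[i] = 1;
                running++;
            }
        }
    }

    for (i = 0; i < NWORKERS; i++)
        PQfinish(workers[i]);
    return 0;
}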
cheers
andrew