From: Jim Nasby <decibel(at)decibel(dot)org>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: ITAGAKI Takahiro <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>, Simon Riggs <simon(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Quick idea for reducing VACUUM contention
Date: 2007-07-27 23:38:48
Message-ID: C964C8DB-5A03-48F3-AC83-064D9E6B34D4@decibel.org
Lists: pgsql-hackers
On Jul 27, 2007, at 1:49 AM, Alvaro Herrera wrote:
> ITAGAKI Takahiro wrote:
>> "Simon Riggs" <simon(at)2ndquadrant(dot)com> wrote:
>>
>>> Read the heap blocks in sequence, but make a conditional lock for
>>> cleanup on each block. If we don't get it, sleep, then try again
>>> when we wake up. If we fail the second time, just skip the block
>>> completely.
>
> It would be cool if we could do something like sweep a range of pages,
> initiate I/O for those that are not in shared buffers, and while that
> is running, lock and clean up the ones that are in shared buffers,
> skipping those that are not lockable right away; when that's done, go
> back to the buffers that were fetched by I/O and clean those up. And
> retry the locking for those that couldn't be locked the first time
> around, also conditionally. And when that's all done, a third pass
> could get those blocks that weren't cleaned up in any of the previous
> passes (and this time the lock would not be conditional).
Would that be substantially easier than just creating a bgreader?
--
Jim Nasby jim(at)nasby(dot)net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)