From: Jan Wieck <JanWieck(at)Yahoo(dot)com>
To: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>
Cc: "Matthew T. O'Connor" <matthew(at)zeut(dot)net>, Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>, Peter Eisentraut <peter_e(at)gmx(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: pg_autovacuum next steps
Date: 2004-03-22 13:27:23
Message-ID: 405EE9BB.6000401@Yahoo.com
Lists: pgsql-hackers
Gavin Sherry wrote:
> On Sun, 21 Mar 2004, Matthew T. O'Connor wrote:
>
>> On Sun, 2004-03-21 at 20:31, Christopher Kings-Lynne wrote:
>> > > I think these configuration issues will become a lot easier if you make
>> > > the autovacuum daemon a subprocess of the postmaster (like, say, the
>> > > checkpoint process). Then you have access to a host of methods for
>> > > storing state, handling configuration, etc.
>> >
>> > Yeah - why delay making it a backend process? :)
>>
>> Ok, well this was part of the reason to have this conversation.
>>
>> My reasons:
>> A) I wasn't sure if people really thought this was ready to be
>> integrated. Tom had said a while ago that it was good to keep it as
>> a contrib module while it's still actively being developed.
>
> I was talking to Jan about some other work on VACUUM related to more
> intelligent vacuuming. Namely, maintaining a map (outside of shared
> memory) of blocks which have been pushed out of the free space map for
> VACUUM to visit (which requires a backend process) and being aware of load
> restrictions (ie, allowing user to only vacuum when the load average is
> less than X, for example) and some other leveling stuff to ensure that
> availability is consistent. Whilst this doesn't related to pg_autovacuum
> specifically, it'd be great if they could be released at the same time, I
> think.
I don't recall the "outside of shared memory" part. Anyhow, the whole
story goes like this:
Maintain 2 bits per block that tell if the block has been vacuumed of
all dead tuples since the last time it was dirtied, and if all its
tuples are completely frozen. If those two conditions are true, there is
no need to vacuum that block at all (Red Flag!!! On further thinking I
realized that this assumes that the FSM is lossless).
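The skip test described above can be sketched like this. The flag names
and the helper are illustrative only, not anything that exists in
PostgreSQL:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-block status bits: one meaning "no dead tuples
 * since the block was last dirtied", one meaning "all tuples are
 * completely frozen". */
#define BLK_ALL_VISIBLE  0x1u
#define BLK_ALL_FROZEN   0x2u

/* VACUUM may skip a block only when both conditions hold. */
static bool
vacuum_can_skip(uint8_t blkbits)
{
    return (blkbits & (BLK_ALL_VISIBLE | BLK_ALL_FROZEN))
           == (BLK_ALL_VISIBLE | BLK_ALL_FROZEN);
}
```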
With a default 8K blocksize, this means 32K per 1GB segment, making 4
additional blocks. I actually think that these extra blocks should be
somehow part of the heap files, so that they are subject to the regular
buffer management.
To keep the lock contention on them low, vacuum and backends will
set/clear new flags in the bufhdr flags member. That way, the bgwriter
and checkpointer will be the usual suspects to set/clear these flags in
the shared bitmap array stored in the extra blocks.
As to where to store these blocks, some block number arithmetic magic
comes to mind. That way a block's relnode and blockno automatically lead
to the bits, even in the case of blind writes.
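The arithmetic from blockno to bit position could look roughly like
this; the function and its layout (one map block covering the 32768
heap blocks that follow it) are assumptions for illustration:

```c
#include <stdint.h>

#define BLCKSZ           8192u
#define BITS_PER_BLK     2u
/* Heap blocks covered by one 8K map block: 8192 * 8 / 2 = 32768. */
#define BLKS_PER_MAPBLK  (BLCKSZ * 8u / BITS_PER_BLK)

/* For heap block `blkno`, locate its 2 status bits: which map block,
 * which byte within that block, and the bit shift inside the byte. */
static void
map_location(uint32_t blkno,
             uint32_t *mapblk, uint32_t *byteno, uint32_t *shift)
{
    uint32_t bitpos;

    *mapblk = blkno / BLKS_PER_MAPBLK;
    bitpos  = (blkno % BLKS_PER_MAPBLK) * BITS_PER_BLK;
    *byteno = bitpos / 8;
    *shift  = bitpos % 8;
}
```

Because the location follows purely from the block number, no extra
lookup state is needed even for blind writes.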
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #