From: Jan Wieck <JanWieck(at)Yahoo(dot)com>
To: Ang Chin Han <angch(at)bytecraft(dot)com(dot)my>
Cc: Christopher Browne <cbbrowne(at)acm(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Experimental patch for inter-page delay in VACUUM
Date: 2003-11-04 04:28:25
Message-ID: 3FA72AE9.9090903@Yahoo.com
Lists: pgsql-hackers

Ang Chin Han wrote:
> Christopher Browne wrote:
>> Centuries ago, Nostradamus foresaw when "Stephen" <jleelim(at)xxxxxxx(dot)com> would write:
>>
>>>As it turns out. With vacuum_page_delay = 0, VACUUM took 1m20s (80s)
>>>to complete, with vacuum_page_delay = 1 and vacuum_page_delay = 10,
>>>both VACUUMs completed in 18m3s (1080 sec). A factor of 13 times!
>>>This is for a single 350 MB table.
>>
>>
>> While it is unfortunate that the minimum quantum seems to commonly be
>> 10ms, it doesn't strike me as an enormous difficulty from a practical
>> perspective.
>
> If we can't lower the minimum quantum, we could always vacuum 2 pages
> before sleeping 10 ms, effectively sleeping 5 ms per page.
>
> Say,
> vacuum_page_per_delay = 2
> vacuum_time_per_delay = 10
That's exactly what I did ... look at the combined experiment posted
under subject "Experimental ARC implementation". The two parameters are
named vacuum_page_groupsize and vacuum_page_delay.
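In rough outline the grouped delay looks like this (a minimal sketch,
not the code from the patch; vacuum_page_groupsize and
vacuum_page_delay are the real parameter names, the helper function
and the counter are only illustrative):

#include <unistd.h>

int vacuum_page_groupsize = 2;  /* pages processed between naps */
int vacuum_page_delay = 10;     /* nap length in milliseconds */

static int pages_since_nap = 0;

/* Called once per page from the vacuum scan loop. */
static void
vacuum_delay_point(void)
{
    if (vacuum_page_delay > 0 &&
        ++pages_since_nap >= vacuum_page_groupsize)
    {
        /*
         * One nap per groupsize pages: with a groupsize of 2 and the
         * common 10 ms minimum quantum, the effective delay is about
         * 5 ms per page, as suggested above.
         */
        usleep(vacuum_page_delay * 1000L);
        pages_since_nap = 0;
    }
}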
>
> What would be interesting would be pg_autovacuum changing these values
> per table, depending on current I/O load.
>
> Hmmm. Looks like there are a lot of interesting things pg_autovacuum
> can do:
> 1. When I/O load is low, run multiple vacuums at full speed on
> different, smaller tables, noting that these vacuums will increase
> the I/O load as well.
> 2. When I/O load is high, vacuum big, busy tables slowly.
>
From what I see here, with the two parameters above together with the
ARC scan resistance and the changed strategy for where to place pages
faulted in by vacuum, I think one can handle that pretty well now.
It's certainly much better than before.
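The placement change boils down to something like this (again only a
sketch; the toy BufferDesc and both function names are made up, not
the ARC code): buffers faulted in by vacuum are tagged so that
eviction recycles them first, and a vacuum scan can no longer flush
the hot working set.

typedef struct BufferDesc
{
    int buf_id;
    int vacuum_touched;     /* 1 = faulted in by vacuum, recycle early */
} BufferDesc;

/*
 * On a buffer fault, tag vacuum's pages instead of promoting them
 * the way a normal access would.
 */
static void
note_buffer_access(BufferDesc *buf, int called_by_vacuum)
{
    buf->vacuum_touched = called_by_vacuum;
}

/*
 * Eviction prefers vacuum-touched buffers, so recently used pages
 * survive a full-table vacuum scan.
 */
static BufferDesc *
choose_victim(BufferDesc *bufs, int n)
{
    for (int i = 0; i < n; i++)
        if (bufs[i].vacuum_touched)
            return &bufs[i];
    return &bufs[0];        /* otherwise the normal LRU pick (elided) */
}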
What still needs to be addressed is the IO storm caused by
checkpoints. I see it much relaxed when stretching out the
BufferSync() over most of the time until the next one is due. But the
kernel sync at its end still pushes the system hard against the wall.
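Stretching it out amounts to roughly this (a sketch only, not the
real BufferSync(); write_one_dirty_buffer() and the pacing numbers
are made up): write the dirty pages one at a time with a nap in
between, sized so that the pass finishes shortly before the next
checkpoint is due.

#include <time.h>

/* Stand-in for handing one dirty page to the kernel. */
static void
write_one_dirty_buffer(int buf_id)
{
    /* ... issue the write() for this buffer ... */
}

/*
 * Spread the checkpoint's writes over most of the interval until the
 * next checkpoint, instead of issuing them in one burst.
 */
static void
spread_buffer_sync(int ndirty, long interval_ms)
{
    long budget_ms = interval_ms * 9 / 10;      /* leave some slack */
    long nap_ms = (ndirty > 0) ? budget_ms / ndirty : 0;

    for (int i = 0; i < ndirty; i++)
    {
        write_one_dirty_buffer(i);
        if (nap_ms > 0)
        {
            struct timespec ts;

            ts.tv_sec = nap_ms / 1000;
            ts.tv_nsec = (nap_ms % 1000) * 1000000L;
            nanosleep(&ts, NULL);
        }
    }

    /*
     * The final sync() still hits the kernel all at once; that is the
     * remaining wall described above.
     */
}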
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #