Re: 7.3.1 takes long time to vacuum table?

From: Martijn van Oosterhout <kleptog(at)svana(dot)org>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Mark Cave-Ayland <m(dot)cave-ayland(at)webbased(dot)co(dot)uk>, shridhar_daithankar(at)persistent(dot)co(dot)in, PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: 7.3.1 takes long time to vacuum table?
Date: 2003-02-20 02:33:16
Message-ID: 20030220023316.GE10807@svana.org
Lists: pgsql-general

On Wed, Feb 19, 2003 at 08:53:42PM -0500, Tom Lane wrote:
> Martijn van Oosterhout <kleptog(at)svana(dot)org> writes:
> > Well, consider that it's reading every single page in the table from the end
> > down to halfway (since every tuple was updated). If you went back in chunks
> > of 128K then the kernel may get a chance to cache the following
> > blocks.
>
> I fear this would be optimization with blinkers on :-(. The big reason
> that VACUUM FULL scans backwards is that at the very first (last?) page
> where it cannot push all the tuples down to lower-numbered pages, it
> can abandon any attempt to move more tuples. The file can't be made
> any shorter by internal shuffling, so we should stop. If you back up
> multiple pages and then scan forward, you would usually find yourself
> moving the wrong tuples, ie ones that cannot help you shrink the file.

I agree with the general idea. However, in this case there are 40GB+ of tuples
to move; if you moved backwards in steps of 2MB it would make no significant
difference to the resulting table. It would only become a problem near the end
of the compacting, and at that point you can simply stop; the remaining pages
can surely be tracked in the FSM.
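
Roughly, the page visit order I have in mind is something like this (a
standalone toy sketch, not backend code; visit_page() is just a placeholder,
the 8K page size is PostgreSQL's default, and the 2MB chunk is the figure
used above):

/*
 * Step backwards through the file in 2MB chunks, but scan forwards
 * within each chunk, so the kernel's readahead can prefetch the pages
 * instead of being defeated by a strictly backwards walk.
 */
#include <stdio.h>

#define PAGE_SIZE       8192                    /* default heap page size */
#define CHUNK_SIZE      (2 * 1024 * 1024)       /* 2MB backward step */
#define PAGES_PER_CHUNK (CHUNK_SIZE / PAGE_SIZE)

static void visit_page(long pageno)
{
    /* placeholder for "look at the page, try to move its tuples down" */
    printf("visiting page %ld\n", pageno);
}

static void scan_back_in_chunks(long npages)
{
    long chunk_start = npages - (npages % PAGES_PER_CHUNK);

    if (chunk_start == npages)          /* table ends on a chunk boundary */
        chunk_start -= PAGES_PER_CHUNK;

    for (; chunk_start >= 0; chunk_start -= PAGES_PER_CHUNK)
    {
        long end = chunk_start + PAGES_PER_CHUNK;

        if (end > npages)
            end = npages;
        for (long p = chunk_start; p < end; p++)  /* sequential within chunk */
            visit_page(p);
    }
}

int main(void)
{
    scan_back_in_chunks(1000);          /* e.g. a 1000-page (8MB) table */
    return 0;
}

Within each chunk the reads are sequential, so the kernel gets a chance to
cache the following blocks, as I said before.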

Next time you do a vacuum you can go back and do the compacting properly. On
tables of the size that matters here, I don't think anyone will care if the
last 2MB (= 0.0044% of the table) isn't optimally packed the first time round.

Does vacuum full have to produce the optimum result the first time?

> I suspect that what we really want here is a completely different
> algorithm (viz copy into a new file, like CLUSTER) when the initial scan
> reveals that there's more than X percent of free space in the file.

You could do the jump-back-in-blocks only if more than 30% of the table is
empty and the table is over 1GB. For the example here, a simple defragging
algorithm would suffice: start at the beginning of the file and pack each live
tuple into the lowest free space. It will move *every* tuple, but it's more
cache friendly. It's pretty extreme though.
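
To make that concrete, the single pass I mean looks roughly like this (a toy
sketch over an in-memory array with a made-up record layout, not the real
tuple/page machinery):

/*
 * Walk the table once from the start and pack every live tuple into the
 * lowest free slot.  Returns the new tuple count; everything past it is
 * free space that can be truncated or handed to the FSM.
 */
#include <stdbool.h>
#include <stddef.h>

typedef struct
{
    bool live;              /* stand-in for tuple visibility */
    char data[64];          /* stand-in for the tuple contents */
} Tuple;

static size_t pack_forward(Tuple *tuples, size_t ntuples)
{
    size_t insert = 0;       /* lowest free slot seen so far */

    for (size_t i = 0; i < ntuples; i++)
    {
        if (!tuples[i].live)
            continue;
        if (i != insert)
        {
            tuples[insert] = tuples[i];     /* move the live tuple down */
            tuples[i].live = false;
        }
        insert++;
    }
    return insert;
}

int main(void)
{
    Tuple t[4] = { {false, "dead"}, {true, "a"}, {false, "dead"}, {true, "b"} };
    size_t n = pack_forward(t, 4);      /* n == 2; "a" then "b", order kept */

    return n == 2 ? 0 : 1;
}

All the writes go to the lowest free slot and the reads are one sequential
pass, so it's about as cache and readahead friendly as a plain seqscan.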

It does preserve table order, though, whereas the current algorithm will
reverse the order of all the tuples it moves, possibly causing similar
backward-scan problems later with your index scans.
--
Martijn van Oosterhout <kleptog(at)svana(dot)org> http://svana.org/kleptog/
> Support bacteria! They're the only culture some people have.
