From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Martijn van Oosterhout <kleptog(at)svana(dot)org>
Cc: Mark Cave-Ayland <m(dot)cave-ayland(at)webbased(dot)co(dot)uk>, shridhar_daithankar(at)persistent(dot)co(dot)in, PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: 7.3.1 takes long time to vacuum table?
Date: 2003-02-20 01:53:42
Message-ID: 16222.1045706022@sss.pgh.pa.us
Lists: pgsql-general

Martijn van Oosterhout <kleptog(at)svana(dot)org> writes:
> Well, consider that it's reading every single page in the table from the end
> down to halfway (since every tuple was updated). If you went back in chunks
> of 128K then the kernel may get a chance to cache the following
> blocks.

I fear this would be optimization with blinkers on :-(. The big reason
that VACUUM FULL scans backwards is that at the very first (last?) page
where it cannot push all the tuples down to lower-numbered pages, it
can abandon any attempt to move more tuples. The file can't be made
any shorter by internal shuffling, so we should stop. If you back up
multiple pages and then scan forward, you would usually find yourself
moving the wrong tuples, ie ones that cannot help you shrink the file.
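
To illustrate the stopping rule, here's a rough standalone simulation
(not the real vacuum code; PAGE_CAP and the per-page tuple counts are
invented, and real tuples of course vary in size):

#include <stdio.h>

#define NPAGES   8
#define PAGE_CAP 4              /* invented: max tuples per page */

int
main(void)
{
    /* invented live-tuple counts per page after heavy updates */
    int live[NPAGES] = {4, 3, 1, 2, 0, 1, 2, 1};
    int newlen = NPAGES;
    int p;

    /* walk pages from the end of the file toward the front */
    for (p = NPAGES - 1; p > 0; p--)
    {
        int free_below = 0;
        int q;

        for (q = 0; q < p; q++)
            free_below += PAGE_CAP - live[q];

        /* first page whose tuples can't all be pushed down to
         * lower-numbered pages: stop, since no further shuffling
         * can make the file any shorter */
        if (live[p] > free_below)
            break;

        /* relocate this page's tuples into lower-numbered pages */
        for (q = 0; q < p && live[p] > 0; q++)
        {
            int room = PAGE_CAP - live[q];
            int move = (room < live[p]) ? room : live[p];

            live[q] += move;
            live[p] -= move;
        }

        newlen = p;             /* page p is empty, so it can be cut off */
    }

    printf("file shrinks from %d to %d pages\n", NPAGES, newlen);
    return 0;
}

With the counts above this prints "file shrinks from 8 to 4 pages":
the scan empties pages 7..4 and then halts at page 3, whose tuples no
longer fit anywhere lower down.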
I suspect that what we really want here is a completely different
algorithm (viz copy into a new file, like CLUSTER) when the initial scan
reveals that there's more than X percent of free space in the file.
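
The decision itself could be as simple as something like this
(hypothetical sketch only; REWRITE_THRESHOLD and the function name are
invented, since the "X percent" above is deliberately unspecified):

#include <stdbool.h>
#include <stdio.h>

#define REWRITE_THRESHOLD 0.50  /* invented stand-in for "X percent" */

/* Pick a CLUSTER-style copy into a new file when the initial scan finds
 * the table mostly free space; otherwise shuffle tuples in place. */
bool
use_rewrite_strategy(double free_bytes, double total_bytes)
{
    return total_bytes > 0 && free_bytes / total_bytes > REWRITE_THRESHOLD;
}

int
main(void)
{
    /* e.g. a 100MB table where updates left 80MB of dead space */
    printf("rewrite? %s\n",
           use_rewrite_strategy(80e6, 100e6) ? "yes" : "no");
    return 0;
}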
regards, tom lane