From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc>
Cc: Luke Lonergan <llonergan(at)greenplum(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: random observations while testing with a 1,8B row table
Date: 2006-03-10 19:54:10
Message-ID: 8043.1142020450@sss.pgh.pa.us
Lists: pgsql-hackers
Stefan Kaltenbrunner <stefan(at)kaltenbrunner(dot)cc> writes:
>>> 3. vacuuming this table - it turned out that VACUUM FULL is completely
>>> unusable on a table of this size (which I actually expected before), not
>>> only due to the locking involved but rather due to a gigantic memory
>>> requirement and unbelievable slowness.
> Sure, that was mostly meant as an experiment; if I had to do this on a
> production database I would most likely use CLUSTER to get the desired
> effect (which in my case was purely getting back the disk space wasted
> by dead tuples).
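For illustration, the CLUSTER route described above might look roughly like
this (the table and index names are made up; CLUSTER index ON table is the
syntax of this era):

    -- Rewrite the heap in index order: live tuples are copied into a new
    -- heap file, and the old file, dead space included, is then dropped.
    CLUSTER bigtable_pkey ON bigtable;
    -- Refresh planner statistics after the rewrite.
    ANALYZE bigtable;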
Yeah, the VACUUM FULL algorithm is really designed for situations where
just a fraction of the rows have to be moved to re-compact the table.
It might be interesting to teach it to abandon that plan and go to a
CLUSTER-like table rewrite once the percentage of dead space is seen to
reach some suitable level. CLUSTER has its own disadvantages though
(2X peak disk space usage, doesn't work on core catalogs, etc).
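For what it's worth, a rough sketch of measuring that dead-space percentage
with contrib/pgstattuple (the table name is hypothetical):

    -- dead_tuple_percent plus free_percent approximates how much of the
    -- heap a full rewrite would give back.
    SELECT dead_tuple_percent, free_percent
    FROM pgstattuple('bigtable');

A rewrite-capable VACUUM FULL could consult a figure like this up front and
keep the tuple-moving algorithm only for tables whose dead fraction is small.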
regards, tom lane