From: | Jan Peterson <jan(dot)l(dot)peterson(at)gmail(dot)com> |
---|---|
To: | Csaba Nagy <nagy(at)ecircle-ag(dot)com> |
Cc: | postgres performance list <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: How long it takes to vacuum a big table |
Date: | 2005-10-28 16:48:51 |
Message-ID: | 72e966b00510280948h15fd8d56qc3aaca7840f2348d@mail.gmail.com |
Lists: | pgsql-performance |
We've also experienced problems with VACUUM running for a long time.
A VACUUM on our pg_largeobject table, for example, can take over 24
hours to complete (pg_largeobject in our database has over 45 million
rows). With our other tables, we've been able to partition them
(using inheritance) to keep any single table from getting "too large",
but we've been unable to do that with pg_largeobject. Currently,
we're experimenting with moving some of our bulk (large object) data
outside of the database and storing it in the filesystem directly.
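For readers unfamiliar with the inheritance approach mentioned above, a minimal sketch looks something like the following. The table and column names here are purely illustrative, not from our actual schema, and the CHECK constraints only help the planner skip partitions if constraint exclusion is available (it is new in 8.1):

```sql
-- Parent table holds no rows itself; children inherit its columns.
CREATE TABLE measurements (
    id         bigint,
    logged_at  timestamp,
    payload    text
);

-- Each child covers a disjoint time range, enforced by a CHECK constraint.
CREATE TABLE measurements_2005_q3 (
    CHECK (logged_at >= '2005-07-01' AND logged_at < '2005-10-01')
) INHERITS (measurements);

CREATE TABLE measurements_2005_q4 (
    CHECK (logged_at >= '2005-10-01' AND logged_at < '2006-01-01')
) INHERITS (measurements);

-- Queries against the parent see all children's rows, but maintenance
-- can target one child at a time, so no single VACUUM runs for long:
VACUUM ANALYZE measurements_2005_q3;
```

The payoff for VACUUM is that each child table stays small enough to be vacuumed in a reasonable window, which is exactly what we cannot do with the monolithic pg_largeobject catalog table.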
I know that Hannu Krosing has developed some patches that allow
concurrent VACUUMs to run more effectively. Unfortunately, these
patches didn't get into 8.1 so far as I know. You can search the
performance mailing list for more information.
-jan-
--
Jan L. Peterson
<jan(dot)l(dot)peterson(at)gmail(dot)com>