From: PG User 2010 <pguser2010(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: performance question on VACUUM FULL (Postgres 8.4.2)
Date: 2010-01-19 20:19:10
Message-ID: 1e937d501001191219j15e2bb13o48d33d8c24c29f48@mail.gmail.com
Lists: pgsql-performance
Hello,
We are running into performance issues with VACUUM FULL on the
pg_largeobject table in Postgres (8.4.2 under Linux), and I'm wondering if
anybody here might be able to suggest anything to help address the issue.
Specifically, when running VACUUM FULL on the pg_largeobject table, it
appears that one of our CPUs is pegged at 100% (we have 8 on this particular
box), and the I/O load on the machine is VERY light (10-20 I/O operations
per second--nowhere near what our array is capable of). Our pg_largeobject
table is about 200 gigabytes, and I suspect that about 30-40% of the table
is dead rows (after having run vacuumlo and deleted large numbers of large
objects). We've set vacuum_cost_delay to 0.
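For anyone who wants to check similar numbers on their own system, the
dead-row fraction and the throttling setting can be read from the standard
system views (a sketch; the 30-40% figure above is my estimate, not a
measurement from these queries):

```sql
-- Dead-row estimate from the stats collector for pg_largeobject:
SELECT n_live_tup, n_dead_tup,
       round(100.0 * n_dead_tup / nullif(n_live_tup + n_dead_tup, 0), 1)
           AS pct_dead
FROM pg_stat_all_tables
WHERE relname = 'pg_largeobject';

-- Confirm vacuum throttling is disabled for the manual VACUUM FULL:
SHOW vacuum_cost_delay;  -- 0 means no cost-based delay
```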
I have read that doing a CLUSTER might be faster and less intrusive, but
trying that on the pg_largeobject table yields:

ERROR: "pg_largeobject" is a system catalog
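For reference, the command I tried was along these lines (clustering on the
table's primary index; the index name here is the standard system one, but
the exact form of my command may have differed):

```sql
-- CLUSTER refuses to operate on system catalogs such as pg_largeobject:
CLUSTER pg_largeobject USING pg_largeobject_loid_pn_index;
-- ERROR:  "pg_largeobject" is a system catalog
```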
One other thing: is it possible to run VACUUM FULL for a while, interrupt
it, then run it again later and have it pick up from where it left off? If
so, then we could just break up the VACUUM FULL into more manageable chunks
and tackle it a few hours at a time when our users won't care. I thought I
read that some of the FSM changes in 8.4 would make this possible, but I'm
not sure if that applies here.
If anybody has any info here, it would be greatly appreciated. Thanks!
Sam