From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: David Mitchell <david(dot)mitchell(at)telogis(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Vacuum advice
Date: 2005-06-23 00:46:52
Message-ID: 27938.1119487612@sss.pgh.pa.us
Lists: pgsql-general
David Mitchell <david(dot)mitchell(at)telogis(dot)com> writes:
>> If you *are* using 8.0 then we need to look closer.
> Sorry, I should have mentioned, I am using PG 8.0. Also, although this
> is a 'mass insert', it's only kind of mass. While there are millions of
> rows, they are inserted in blocks of 500 (with a commit in between).
> We're thinking we might set up vacuum_cost_limit to around 100 and put
> vacuum_cost_delay at 100 and then just run vacuumdb in a cron job every
> 15 minutes or so, does this sound silly?
It doesn't sound completely silly, but if you are doing inserts and not
updates/deletes then there's not anything for VACUUM to do, really.
An ANALYZE command might get the same result with less effort.
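A minimal sketch of the two settings and the cron job under discussion (database and table names are placeholders, and `-z` makes vacuumdb run ANALYZE as well):

```
# postgresql.conf (8.0) -- throttle vacuum's I/O impact
vacuum_cost_delay = 100    # ms to sleep once the cost limit is reached
vacuum_cost_limit = 100    # accumulated cost that triggers the sleep

# crontab entry: vacuum/analyze one busy table every 15 minutes
*/15 * * * * vacuumdb -z -t my_table mydb
```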
I am however still wondering why 8.0 doesn't get it right without help.
Can you try a few EXPLAIN ANALYZEs as the table grows and watch whether
the cost estimates change?
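Something along these lines, run periodically as rows accumulate (table and query are placeholders), would show whether the planner's row estimates are tracking the actual counts reported next to them:

```sql
-- Compare "rows=" in the estimate against the actual rows returned
EXPLAIN ANALYZE SELECT * FROM my_table WHERE id = 42;

-- Then refresh the statistics and see if the estimate moves
ANALYZE my_table;
EXPLAIN ANALYZE SELECT * FROM my_table WHERE id = 42;
```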
(Also, if this is actually 8.0.0 and not a more recent dot-release,
I believe there were some bug fixes in this vicinity in 8.0.2.)
regards, tom lane