From: David Mitchell <david(dot)mitchell(at)telogis(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Vacuum advice
Date: 2005-06-22 20:31:56
Message-ID: 42B9CABC.3000108@telogis.com
Lists: pgsql-general
Hi,
I have a system that has a moderate amount of activity on it, nothing
strenuous. The activity is a real mixture of operations: selects,
updates, inserts and deletes. One strange thing about our database is
that we have a lot of stored procedures that use temporary tables. Most
of the time the system runs fine, and a nightly vacuum does the job.
However, occasionally we need to import data, and this involves
inserting several million rows into a table, but this just *cripples*
postgres. After the import has been running for a while, simple selects
take a long time, and strangely, the query planner starts using a
sequential scan on a table with millions of items in it. Surely inserts
shouldn't be doing this? Should I be running a vacuum during the import?
After how many operations should I be calling vacuum? We tried
pg_autovacuum, but it didn't seem to do a very good job: we would often
find the query planner choosing bad plans that a manual vacuum analyze
would fix.
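
For instance, would interleaving the import with periodic maintenance
be the right approach? Something like this, where the table name, file
path and batch size are just illustrative:

    -- load one batch of ~50,000 rows
    COPY items FROM '/tmp/import_batch_0001.dat';

    -- refresh planner statistics (and clean up dead tuples) between
    -- batches; VACUUM cannot run inside a transaction block, so this
    -- is issued on its own
    VACUUM ANALYZE items;

Or is ANALYZE alone enough between batches, with a full vacuum once
the import finishes?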
Part of the import also updates rows in a table to keep track of how
far the import has progressed. The update just sets an integer column,
so does this create a dead tuple, or will postgres alter the column
in-place?
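The bookkeeping statement is essentially just this (table and column
names illustrative):

    UPDATE import_progress SET rows_done = rows_done + 50000;

Does every execution of that leave a dead row version behind, even
though only a single integer changed?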
Thanks for your help.
--
David Mitchell
Software Engineer
Telogis