From: "Vanole, Mike" <MV5492(at)att(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Best approach for large table maintenance
Date: 2008-04-22 17:04:27
Message-ID: C9C075DB3961464180CE3DEF766B4A2C07EB4376@ad01msxmb007.US.Cingular.Net
Lists: pgsql-general
Hi,
I have an application where I drop, recreate, and reload a 1-million-row
table each day, then recreate its indexes. I do this to avoid having to
run VACUUM on the table, as I would if I applied the daily deltas with
DELETEs or UPDATEs instead.
Even with this approach, running VACUUM still seems to have value,
because its output shows index row versions being removed. (I do not
drop the indexes explicitly; they are dropped along with the table.)
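(For example, after the reload I can run something like the following
to see those removals; the table name is just a placeholder, not my
real schema.)

  -- Reports, for the table and each of its indexes, how many
  -- dead row versions were removed:
  VACUUM VERBOSE mytable;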
I am considering TRUNCATE instead, but I still have several indexes
that would slow down the data load if left in place.
My question is, what is the best way to manage a large table that gets
reloaded each day?
Drop
Create Table
Load (copy or insert/select)
Create Indexes
Vacuum anyway?
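In SQL terms, that first option would be roughly this (placeholder
table, column, and file names, not my real schema):

  -- Option 1: drop and rebuild the whole table each day
  DROP TABLE mytable;
  CREATE TABLE mytable (id integer, payload text);
  COPY mytable FROM '/path/to/daily_load.csv' WITH CSV;  -- or INSERT ... SELECT
  CREATE INDEX mytable_id_idx ON mytable (id);
  VACUUM ANALYZE mytable;  -- still worthwhile here?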
Or...
DROP indexes
Truncate
Load (copy or insert/select)
Create Indexes
And is vacuum still going to be needed?
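The second option would be roughly this (same placeholder names as
above):

  -- Option 2: keep the table, truncate and reload
  DROP INDEX mytable_id_idx;
  TRUNCATE TABLE mytable;
  COPY mytable FROM '/path/to/daily_load.csv' WITH CSV;  -- or INSERT ... SELECT
  CREATE INDEX mytable_id_idx ON mytable (id);
  -- VACUUM ANALYZE mytable;  -- needed here as well?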
Many Thanks,
Mike