From: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
To: arnaulist(at)andromeiberica(dot)com
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Massive delete of rows, how to proceed?
Date: 2006-11-27 08:44:28
Message-ID: b42b73150611270044x16d2aecbpc3b24447be586270@mail.gmail.com
Lists: pgsql-performance
On 11/25/06, Arnau <arnaulist(at)andromeiberica(dot)com> wrote:
> Hi all,
>
> I have a table of statistics with more than 15 million rows. I'd like
> to delete the oldest statistics, which could be about 7 million rows.
> Which method would you recommend for doing this? I'd also be
> interested in calculating some statistics about the deleted rows,
> such as how many rows were deleted per date. I was thinking of
> writing a function; any recommendations?
a function, like a single sql statement, runs in one transaction, and
this operation would lock quite a few records. merlin's 3rd rule:
long-running transactions are (usually) evil.
my gut says moving the keeper records to a swap table, dropping the
main table, and renaming the swap table back might be better. however,
this kind of thing can cause problems for logged-in sessions because
of cached plan issues, so beware.
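the swap-table approach above might look roughly like this sketch --
table and column names (stats, stat_date) and the cutoff date are
hypothetical, and you would adapt them to your schema:

```sql
-- Hypothetical schema: stats(stat_date date, ...); keep rows newer than the cutoff.
BEGIN;

-- Copy only the keeper rows into a new table.
CREATE TABLE stats_keep AS
    SELECT * FROM stats WHERE stat_date >= '2006-06-01';

-- Drop the old table and rename the new one into place.
DROP TABLE stats;
ALTER TABLE stats_keep RENAME TO stats;

-- CREATE TABLE AS copies no indexes, constraints, defaults, or grants,
-- so recreate them here before committing.
COMMIT;
```

copying 8 million keeper rows and dropping the table is far cheaper
than deleting 7 million rows in place, and it leaves no dead tuples
behind to vacuum.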
do not write a function that deletes records row by row unless you
have exhausted all other courses of action.
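as for the per-date statistics, you can capture those with a single
aggregate query before discarding the old rows -- again, the names
and cutoff here are hypothetical:

```sql
-- Count the rows that are about to be removed, grouped by date.
-- Run this (or INSERT its result into an audit table) before the swap/drop.
SELECT stat_date, count(*) AS deleted_rows
FROM stats
WHERE stat_date < '2006-06-01'
GROUP BY stat_date
ORDER BY stat_date;
```

one pass over the old rows gives you the full per-date breakdown,
with no need to count inside a row-by-row loop.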
merlin