From: Tomas Vondra <tv(at)fuzzy(dot)cz>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Massive delete from a live production DB
Date: 2011-05-12 20:09:34
Message-ID: 4DCC3E7E.6000208@fuzzy.cz
Lists: pgsql-general
On 12.5.2011 16:23, Phoenix Kiula wrote:
> Hi
>
> Been reading some old threads (pre-9.x versions) and it seems the
> consensus is to avoid massive deletes from a table, as they create so
> much unrecoverable space (gaps) that a VACUUM FULL would be needed,
> etc.
>
> Instead, we might as well do a dump/restore. Faster, cleaner.
>
> This is all well and good, but what about a situation where the
> database is in production and cannot be brought down for this
> operation, or even for a CLUSTER?
>
> Any ideas on what I could do without losing all the live updates? I
> need to get rid of about 11% of a 150-million-row database, with each
> row being roughly 1 to 5 KB in size...
>
> Thanks! Version is 9.0.4.
One of the usual recipes in such cases is partitioning. If you can
divide the data so that a delete is equivalent to dropping a partition,
then you don't need to worry about vacuum etc.
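
For example, on 9.0 this means inheritance-based partitioning. A
minimal sketch, assuming the rows can be split by a timestamp column
(the table and column names are made up for illustration):

    -- parent table, holds no data itself
    CREATE TABLE events (
        id         bigint,
        created_at timestamp NOT NULL,
        payload    text
    );

    -- one child per month; the CHECK constraints let the planner
    -- skip partitions when constraint_exclusion is enabled
    CREATE TABLE events_2011_04 (
        CHECK (created_at >= '2011-04-01' AND created_at < '2011-05-01')
    ) INHERITS (events);

    CREATE TABLE events_2011_05 (
        CHECK (created_at >= '2011-05-01' AND created_at < '2011-06-01')
    ) INHERITS (events);

    -- instead of DELETE FROM events WHERE created_at < '2011-05-01'
    -- (which leaves dead tuples behind), just drop the partition
    DROP TABLE events_2011_04;

You'd also need an INSERT trigger (or rules) on the parent table to
route new rows into the right child - see the "Partitioning" chapter in
the docs for the complete recipe.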
But partitioning has its own problems - you can't reference the
partitioned table with foreign keys, the query plans are often not as
efficient as with a non-partitioned table, etc.
regards
Tomas