Massive delete from a live production DB

From: Phoenix Kiula <phoenix(dot)kiula(at)gmail(dot)com>
To: PG-General Mailing List <pgsql-general(at)postgresql(dot)org>
Subject: Massive delete from a live production DB
Date: 2011-05-12 14:23:38
Message-ID: BANLkTi=H+G1nqus+KCyv94eVqwBrhkjjbw@mail.gmail.com
Lists: pgsql-general

Hi

I've been reading some old threads (pre-9.x) and the consensus seems to be
to avoid massive deletes from a table, because they leave so much dead
space (bloat) that a VACUUM FULL would be needed to reclaim it.
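
For concreteness, this is the kind of single-shot delete I mean (table
and column names are just placeholders for my real schema):

    DELETE FROM big_table
    WHERE created_at < '2009-01-01';   -- placeholder condition

    VACUUM FULL big_table;   -- holds an exclusive lock for the duration

The dead tuples left behind by the DELETE only become reusable after a
VACUUM, and the table file itself only shrinks with the (locking)
VACUUM FULL, which is what those threads warn about.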

Instead, we might as well do a dump/restore. Faster, cleaner.
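
As I understand it, the in-database equivalent of that advice is to
rebuild the table with only the rows I want to keep and swap it in; a
rough sketch (again, the names and the WHERE clause are placeholders):

    BEGIN;
    CREATE TABLE big_table_new AS
      SELECT * FROM big_table
      WHERE created_at >= '2009-01-01';  -- rows to keep
    ALTER TABLE big_table RENAME TO big_table_old;
    ALTER TABLE big_table_new RENAME TO big_table;
    COMMIT;

Indexes, constraints and triggers would have to be recreated on the new
table, and any writes that arrive during the copy are lost, which is
exactly my problem on a live database.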

This is all well and good, but what about a situation where the
database is in production and cannot be brought down for this
operation, or even for a CLUSTER?

Any ideas on what I could do without losing all the live updates? I
need to get rid of about 11% of a 150-million-row table, with each row
being roughly 1 to 5 KB in size...
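
The only approach I can think of so far is to delete in small batches,
so that no single transaction runs for hours and ordinary (auto)vacuum
can keep reusing the freed space; a very rough sketch of what I imagine
(the id primary key, the condition and the batch size are all made up):

    DELETE FROM big_table
    WHERE id IN (
        SELECT id FROM big_table
        WHERE created_at < '2009-01-01'   -- rows to get rid of
        LIMIT 10000
    );
    -- repeat until 0 rows are affected, then let plain VACUUM /
    -- autovacuum (not VACUUM FULL) clean up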

Thanks! Version is 9.0.4.
