Re: Best way to delete big amount of records from big table

From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Michael Lewis <mlewis(at)entrata(dot)com>
Cc: Ekaterina Amez <ekaterina(dot)amez(at)zunibal(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Best way to delete big amount of records from big table
Date: 2020-03-27 15:09:54
Message-ID: 20200327150954.GG20103@telsasoft.com
Lists: pgsql-performance

On Fri, Mar 27, 2020 at 08:41:04AM -0600, Michael Lewis wrote:
> 2) If you are deleting/moving most of the table (91 of 150 million),
> consider moving only the records you are keeping to a new table, renaming
> old table, and renaming new table back to original name. Then you can do
> what you want to shift the data in the old table and delete it.
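
A minimal sketch of that swap, with made-up table and column names (the
WHERE clause is whatever identifies the rows you want to keep):

    BEGIN;
    -- new table with the same structure, constraints and indexes
    CREATE TABLE mytable_new (LIKE mytable INCLUDING ALL);
    -- copy only the rows being kept
    INSERT INTO mytable_new SELECT * FROM mytable
      WHERE created_at >= '2019-01-01';
    -- swap names; the bulk of the old rows stay behind in mytable_old
    ALTER TABLE mytable RENAME TO mytable_old;
    ALTER TABLE mytable_new RENAME TO mytable;
    COMMIT;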

You could also make the old table a child of (inherit from) the new table.
That way the old rows stay visible through the new (parent) table, and you can
remove them separately, at your own pace.
Partitioning (with legacy inheritance or the new, integrated way available in
Postgres 10) allows DROPping the oldest tables rather than DELETEing from one
gigantic table.
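
For example, a rough sketch using legacy inheritance and the hypothetical
names above:

    -- make the old table a child: its rows stay visible through mytable
    ALTER TABLE mytable_old INHERIT mytable;
    -- delete/archive from mytable_old at leisure, then detach and drop it
    ALTER TABLE mytable_old NO INHERIT mytable;
    DROP TABLE mytable_old;

With declarative partitioning (Postgres 10+), retiring the oldest data is just
a DROP TABLE of whichever partition holds it.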

You should consider cleaning up the old table after you DELETE from it,
either with VACUUM FULL (which requires a long ACCESS EXCLUSIVE lock) or
pg_repack (which only acquires an exclusive lock briefly).
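
For example (the table name is hypothetical, and pg_repack needs both the
extension and its client binary installed):

    -- rewrites the table compactly, but holds an ACCESS EXCLUSIVE lock throughout
    VACUUM FULL mytable;

    -- or rebuild it online; pg_repack takes exclusive locks only briefly:
    --   $ pg_repack --table=mytable mydb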

--
Justin
