Re: Need an idea to operate massive delete operation on big size table.

From: Alex Balashov <abalashov(at)evaristesys(dot)com>
To: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: Need an idea to operate massive delete operation on big size table.
Date: 2025-01-15 21:24:53
Message-ID: F2F9DB44-8DBA-4606-A570-1C12C588A7BC@evaristesys.com
Lists: pgsql-admin

In my experience, mass deletions are tough. There may be a supporting index to assist the broadest criteria, but the filtered rows that result must still be sequentially scanned for non-indexed sub-criteria[1]. That can still be an awful lot of rows and a huge, time-consuming workload.
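To illustrate the point above, here is a hypothetical sketch (table and column names are invented, not from the original post): an index narrows the broadest criterion, but each surviving row must still be fetched and checked against the non-indexed sub-criterion.

```sql
-- Hypothetical "events" table, indexed only on the broad date criterion.
CREATE INDEX IF NOT EXISTS events_created_at_idx ON events (created_at);

DELETE FROM events
WHERE created_at < now() - interval '1 year'  -- broad, index-assisted criterion
  AND status = 'archived';                    -- non-indexed sub-criterion,
                                              -- checked row by row on fetch
```

On a large table, `EXPLAIN` will typically show an index scan on `created_at` with the `status` condition applied as a per-row filter, which is where the bulk of the time goes.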

While it won't help with deduplication, partitioning is a very good, if somewhat labour-intensive, solution to the problem of aging old data off the back of a rolling archive. Once upon a time, I had an installation with a periodic hygienic `DELETE` once or twice a year, which took many hours to plan and execute and placed considerable demand on the system. We switched to monthly partitioning, and the result was, to some, indistinguishable from magic.
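The monthly-partitioning approach might look something like the following sketch (again with hypothetical names, not the original installation's schema): aging off a month becomes a metadata-only `DETACH`/`DROP` rather than a row-by-row `DELETE`.

```sql
-- Hypothetical sketch: range-partition by month.
CREATE TABLE events (
    id         bigint GENERATED ALWAYS AS IDENTITY,
    created_at timestamptz NOT NULL,
    payload    jsonb
) PARTITION BY RANGE (created_at);

CREATE TABLE events_2025_01 PARTITION OF events
    FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');

-- Aging off a month: detach (archive elsewhere if needed), then drop.
-- Near-instant, no per-row work, no bloat left behind.
ALTER TABLE events DETACH PARTITION events_2025_01;
DROP TABLE events_2025_01;
```

The trade-off is the operational overhead of creating partitions ahead of time (by hand or with a tool such as `pg_partman`) and ensuring queries carry the partition key so pruning can kick in.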

-- Alex

[1] Which normally doesn't make sense to index, in the overall tradeoff of index size and maintenance overhead vs. performance payoff.

--
Alex Balashov
Principal Consultant
Evariste Systems LLC
Web: https://evaristesys.com
Tel: +1-706-510-6800
