From: youness bellasri <younessbellasri(at)gmail(dot)com>
To: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
Cc: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Need an idea to operate massive delete operation on big size table.
Date: 2025-01-15 15:12:19
Message-ID: CAP44Ew=Bsk5VLZMDXV0ZbeuwZqBmh6XG4Tg2iKnFoyq_-koxvA@mail.gmail.com
Lists: pgsql-admin
1. *Batch Deletion*
Instead of deleting all records at once, break the operation into smaller
batches. This reduces locking, transaction-log growth, and the risk of
timeouts (see the first sketch below).
2. *Use Indexes*
Ensure that the columns used in the WHERE clause of the delete queries are
indexed. This speeds up finding the rows to delete (second sketch below).
3. *Disable Indexes and Constraints Temporarily*
If the table has many indexes or constraints, dropping or disabling them
during the delete operation can speed up the process. Re-create or
re-enable them afterward (third sketch below).
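For (1), a minimal sketch of batched deletion. It assumes a table named
big_table with a unique id column and a helper table dup_ids(id) holding
the ids to remove (both names are hypothetical), and uses a procedure so
each batch commits in its own transaction (PostgreSQL 11+):

    CREATE PROCEDURE delete_in_batches(batch_size integer DEFAULT 10000)
    LANGUAGE plpgsql AS $$
    BEGIN
        LOOP
            -- Take one batch of ids out of the work queue and delete
            -- the matching rows from the big table.
            WITH batch AS (
                DELETE FROM dup_ids
                WHERE id IN (SELECT id FROM dup_ids LIMIT batch_size)
                RETURNING id
            )
            DELETE FROM big_table WHERE id IN (SELECT id FROM batch);

            -- Commit each batch so locks and old row versions are
            -- released as we go.
            COMMIT;

            EXIT WHEN NOT EXISTS (SELECT 1 FROM dup_ids);
        END LOOP;
    END $$;

    CALL delete_in_batches(10000);

Call it outside an explicit transaction block, otherwise the COMMIT inside
the loop will fail.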
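For (2), a sketch assuming the delete file filters on a column called
order_ref (a hypothetical name). CONCURRENTLY builds the index without
blocking writes:

    CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_big_table_order_ref
        ON big_table (order_ref);

    -- Check that the planner actually uses the index for a
    -- representative statement from the file:
    EXPLAIN DELETE FROM big_table WHERE order_ref = 'ABC-123';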
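For (3), a sketch of dropping secondary indexes around the bulk delete;
the index names are hypothetical, and the index that supports the WHERE
clause should be kept:

    -- Drop indexes that only slow down the delete.
    DROP INDEX IF EXISTS idx_big_table_status, idx_big_table_created_at;

    -- ... run the batched deletes here ...

    -- Rebuild afterward without blocking writers.
    CREATE INDEX CONCURRENTLY idx_big_table_status ON big_table (status);
    CREATE INDEX CONCURRENTLY idx_big_table_created_at ON big_table (created_at);

    -- Foreign-key checks can also be skipped by disabling triggers
    -- (superuser only; use with care, it bypasses referential
    -- integrity enforcement):
    ALTER TABLE big_table DISABLE TRIGGER ALL;
    -- ... deletes ...
    ALTER TABLE big_table ENABLE TRIGGER ALL;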
On Wed, Jan 15, 2025 at 4:08 PM Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
wrote:
> On Wed, Jan 15, 2025 at 9:54 AM Gambhir Singh <gambhir(dot)singh05(at)gmail(dot)com>
> wrote:
>
>> Hi,
>>
>> I received a request from a client to delete duplicate records from a
>> very large table.
>>
>> The delete queries (~2 billion) are provided in a file, and we have to
>> execute that file against the database. Last time it took two days. I
>> feel there must be a more efficient way to delete the records.
>>
>
> Maybe the delete "queries" are poorly written. Maybe there's no
> supporting index.
>
> --
> Death to <Redacted>, and butter sauce.
> Don't boil me, I'm still alive.
> <Redacted> lobster!
>