Re: Enhancement Request

From: Olivier Gautherot <ogautherot(at)gautherot(dot)net>
To: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
Cc: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Enhancement Request
Date: 2024-02-02 15:02:03
Message-ID: CAJ7S9TWVmEa0JKyOSA4xyV9dTtzDTZghDEkJAKFmeWpBTFAUgw@mail.gmail.com
Lists: pgsql-admin

On Fri, Feb 2, 2024, 14:54, Ron Johnson <ronljohnsonjr(at)gmail(dot)com> wrote:

> On Fri, Feb 2, 2024 at 3:50 AM Olivier Gautherot <ogautherot(at)gautherot(dot)net>
> wrote:
>
>>
>>
>> On Thu, Feb 1, 2024, 2:35, Ron Johnson <ronljohnsonjr(at)gmail(dot)com> wrote:
>>
>>>
>>> ...
>>>
>>
>> Deleting large numbers of rows is a complex task with a lot of hidden
>> issues (index management, among other things). Adding a LIMIT paradigm
>> will not simplify it in any way.
>>
>
> Smaller "bites" are easier to manage than giant bites.
>

To some extent, yes. But when it comes to large quantities overall, you
have to consider vacuum, and it's best to take the DB offline for that. It
depends on your use case.
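
To illustrate what a "smaller bites" workaround looks like today (a rough
sketch only, with made-up table and column names), you can already emulate
DELETE ... LIMIT with a ctid subquery and vacuum between batches:

    -- Hypothetical sketch: table "events" and column "created_at" are
    -- made-up names. Repeat the DELETE (e.g. from a shell loop) until it
    -- reports 0 rows, and vacuum between batches so dead tuples and
    -- index bloat stay bounded.
    DELETE FROM events
    WHERE ctid IN (
        SELECT ctid
        FROM   events
        WHERE  created_at < now() - interval '1 month'
        LIMIT  10000          -- one "bite"
    );

    VACUUM events;            -- reclaim dead tuples before the next batch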

>
>> I remember doing it on tables with over 50 million rows and had my share
>> of disaster recoveries. Partitions saved my life.
>>
>
> You must have been doing something wrong.
>

The mistake was hoping for the best, which didn't happen: we didn't take
the feeding process offline (over 1000 rows per minute) and, after 24 hours,
the DB was still trying to recover. We finally took everything offline for
2 hours and it stabilized. The delete process involved chunks of 15 million
rows at a time, one month's worth of data - not a minor issue.
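
This is why partitioning changed the picture for us. As a rough sketch
(table and partition names are made up), with monthly range partitions a
month of data becomes a drop instead of a delete:

    -- Hypothetical sketch: retiring a month of data is a metadata
    -- operation instead of a 15-million-row DELETE followed by a
    -- heavy vacuum.
    CREATE TABLE measurements (
        created_at  timestamptz NOT NULL,
        payload     jsonb
    ) PARTITION BY RANGE (created_at);

    CREATE TABLE measurements_2024_01 PARTITION OF measurements
        FOR VALUES FROM ('2024-01-01') TO ('2024-02-01');

    -- When that month is no longer needed:
    ALTER TABLE measurements DETACH PARTITION measurements_2024_01;
    DROP TABLE measurements_2024_01;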

>
