From: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
To: Olivier Gautherot <ogautherot(at)gautherot(dot)net>
Cc: Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Enhancement Request
Date: 2024-02-02 13:54:39
Message-ID: CANzqJaDC7tMFAG-AJ_XdWfSLz0hGYEfAyPAJY5jy81bHm1r-wA@mail.gmail.com
Lists: pgsql-admin
On Fri, Feb 2, 2024 at 3:50 AM Olivier Gautherot <ogautherot(at)gautherot(dot)net>
wrote:
>
>
> On Thu, Feb 1, 2024 at 2:35 AM, Ron Johnson <ronljohnsonjr(at)gmail(dot)com> wrote:
>
>> On Wed, Jan 31, 2024 at 3:51 PM Hajek, Nick <Nick(dot)Hajek(at)vishay(dot)com>
>> wrote:
>> [snip]
>>
>>> DELETE FROM table1 WHERE table1.id IN (SELECT table1.id FROM table1
>>> LIMIT yourlimitnumber)
>>>
>>
>> The IN predicate is only efficient for a very small number of
>> elements, supported by an index. People (including me) who would find
>> DELETE FROM ... LIMIT useful want to delete a *lot* of rows (but not
>> all in one giant statement).
>>
>
> Deleting large numbers of rows is a complex task with a lot of hidden
> issues (index management, among other things). Adding a LIMIT clause
> will not simplify it in any way.
>
Smaller "bites" are easier to manage than giant bites.
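The "smaller bites" pattern is just a delete-in-a-loop: repeatedly delete a bounded batch of rows until nothing remains, committing between batches. A minimal sketch, using SQLite so it is self-contained and runnable (in PostgreSQL the same subquery pattern works today, since DELETE itself has no LIMIT clause); the `events` table, column names, and batch size are all hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [(f"row-{i}",) for i in range(10_000)])
conn.commit()

BATCH = 1_000  # one manageable "bite" per statement
total_deleted = 0
while True:
    # Delete at most BATCH rows per statement instead of one giant DELETE.
    cur = conn.execute(
        "DELETE FROM events WHERE id IN "
        "(SELECT id FROM events ORDER BY id LIMIT ?)",
        (BATCH,),
    )
    conn.commit()  # each batch is its own transaction
    total_deleted += cur.rowcount
    if cur.rowcount == 0:
        break

remaining = conn.execute("SELECT count(*) FROM events").fetchone()[0]
print(total_deleted, remaining)
```

Committing per batch keeps each transaction (and its lock footprint and WAL volume, in PostgreSQL terms) small, at the cost of rescanning the index once per iteration.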
> I remember doing it on tables with over 50 million rows and had my share
> of disaster recoveries. Partitions saved my life.
>
You must have been doing something wrong.