Re: UPDATE on 20 Million Records Transaction or not?

From: Michael Lewis <mlewis(at)entrata(dot)com>
To: Ganesh Korde <ganeshakorde(at)gmail(dot)com>
Cc: Ron <ronljohnsonjr(at)gmail(dot)com>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: UPDATE on 20 Million Records Transaction or not?
Date: 2020-06-23 17:42:50
Message-ID: CAHOFxGr2j_BYN35By3GJOhevHXPOLV4sKgTgPfqec8wxg_tN4A@mail.gmail.com
Lists: pgsql-general

> > Are you updating *every* row in the table?
>
> No, I am using an update like so:
>
>     UPDATE members SET regdate='2038-01-18' WHERE regdate='2020-07-07';
>
> DB=# select count(*) from members where regdate = '2020-07-07';
>   count
> ----------
>  17333090
> (1 row)
>
Just update them and be done with it. Do the work in batches if it doesn't
matter that concurrent access to the table might see some rows with the old
value and some with the new. Given you are on PG11 and can commit within a
procedure or DO block, you could adapt something like this to run until
done. Also, if you have not tuned autovacuum much, then depending on the
total number of rows in your table it could be good to run a manual vacuum
once or twice during this process so that the dead space is reused.
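A manual pass between batches can be as simple as the following (table name taken from your example; VERBOSE is optional, it just reports what was reclaimed):

```sql
-- Reclaim space left by already-committed batches so later
-- batches and concurrent queries can reuse it.
VACUUM (VERBOSE) members;
```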

*The below was shared by Andreas Krogh on one of these mailing lists about
6 months back.

do $_$
declare
    num_rows bigint;
begin
    loop
        delete from YourTable where id in
            (select id from YourTable where id < 500 limit 100);
        -- capture the row count before COMMIT, which would otherwise
        -- overwrite it and end the loop after one pass
        get diagnostics num_rows = row_count;
        commit;
        raise notice 'deleted % rows', num_rows;
        exit when num_rows = 0;
    end loop;
end;
$_$;
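Adapted to the UPDATE at hand, it might look something like the sketch below (assumptions: members has a primary key id, and a batch size of 10000 is reasonable for your hardware; since updated rows no longer match the predicate, the loop naturally runs to completion):

```sql
do $_$
declare
    num_rows bigint;
begin
    loop
        -- update one batch of not-yet-touched rows
        update members set regdate = '2038-01-18'
        where id in (select id from members
                     where regdate = '2020-07-07'
                     limit 10000);
        -- read the row count before COMMIT clobbers it
        get diagnostics num_rows = row_count;
        commit;
        raise notice 'updated % rows', num_rows;
        exit when num_rows = 0;
    end loop;
end;
$_$;
```

Each COMMIT makes that batch's dead tuples eligible for reuse, which is what lets an occasional manual VACUUM keep bloat down while the loop runs.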
