From: Greg Sabino Mullane <htamfids(at)gmail(dot)com>
To: Lok P <loknath(dot)73(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: How to do faster DML
Date: 2024-02-03 18:28:47
Message-ID: CAKAnmmLjqVucFywUozDnB495GnK3GRS-XS1o2_s0wxcBR3B4hw@mail.gmail.com
Lists: pgsql-general
As a general rule, avoid heavy subselects like that. You don't need to
build a full list of duplicates before starting. Another approach:
create table mytable2 (like mytable1);
alter table mytable2 add primary key (id);
insert into mytable2 select * from mytable1 on conflict do nothing;
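To make the mechanism concrete, here is a toy run with made-up data: the primary key on mytable2 rejects the second row for any given id, and ON CONFLICT DO NOTHING silently skips it instead of erroring. Note that which duplicate survives is whichever row happens to arrive first, which is not deterministic unless you add an ORDER BY to the SELECT.

```sql
-- Hypothetical sample data, purely to illustrate the dedup behavior:
create table mytable1 (id int, val text);
insert into mytable1 values (1, 'a'), (1, 'b'), (2, 'c');

create table mytable2 (like mytable1);
alter table mytable2 add primary key (id);

-- Duplicate ids violate the PK; DO NOTHING skips them rather than aborting:
insert into mytable2 select * from mytable1 on conflict do nothing;

table mytable2;  -- one surviving row per id (which duplicate wins is not guaranteed)
```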
Given the size of your table, you probably want to divide that up.
As long as nothing is changing the original table, you could do:
insert into mytable2 select * from mytable1 order by ctid
  limit 10_000_000 offset 0 on conflict do nothing;
insert into mytable2 select * from mytable1 order by ctid
  limit 10_000_000 offset 10_000_000 on conflict do nothing;
insert into mytable2 select * from mytable1 order by ctid
  limit 10_000_000 offset 20_000_000 on conflict do nothing;
etc.
Note the batched versions still need "on conflict do nothing", or the first
duplicate id will abort the whole insert. (Also, underscore digit separators
like 10_000_000 require Postgres 16 or later; on older versions write 10000000.)
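Rather than typing each statement by hand, the batches could be wrapped in a procedure; a rough sketch, where copy_in_batches is a made-up name and the batch size is an assumption (COMMIT inside a procedure loop needs Postgres 11+, and each batch then commits on its own rather than in one giant transaction):

```sql
create procedure copy_in_batches(batch bigint default 10_000_000)
language plpgsql as $$
declare
  total bigint;
  skip  bigint := 0;
begin
  select count(*) into total from mytable1;
  while skip < total loop
    insert into mytable2
      select * from mytable1 order by ctid limit batch offset skip
      on conflict do nothing;
    commit;  -- release the batch; requires CALL outside a transaction block
    skip := skip + batch;
  end loop;
end $$;

call copy_in_batches();
```

Each pass re-sorts and re-skips from the start, so this is simple rather than optimal; the per-batch commits keep any single transaction from ballooning.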
Cheers,
Greg