From: Thomas Kellerer <spam_eater(at)gmx(dot)net>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Disabling/Enabling index before bulk loading
Date: 2018-07-10 14:24:59
Message-ID: 426a4f2b-7bb3-8270-47d5-e1a71607cfa3@gmx.net
Lists: pgsql-general
Ravi Krishna schrieb am 10.07.2018 um 16:08:
>
> We recently did a test on COPY and found that on large tables (47 million rows , 20GB of raw data) the
> difference in COPY with 16 indexes and COPY without any index is 1:14. That is, COPY is 14 times slower
> when data is ingested with all indexes as opposed to COPY first without index and then create all index.
>
> I googled for earlier postings on this, and it looks like this has been asked before.
>
> This is what I am thinking to do:
>
> 1 - Extract index definition and save it as a SQL somewhere, either a file or a table.
> 2 - Drop all indexes.
> 3 - Ingest data via COPY
> 4 - Recreate all indexes saved in (1).
>
> Is there a generic sql or script or tool to accomplish (1).
You can do this with a script like this:
-- First store the definitions of all the indexes in a table:
create table index_backup
as
select *
from pg_indexes
where schemaname = 'public'
and tablename = 'the_table';
-- now drop all the indexes:
do
$$
declare
l_rec record;
begin
for l_rec in select schemaname, indexname from index_backup
loop
execute format('drop index %I.%I', l_rec.schemaname, l_rec.indexname);
end loop;
end;
$$;
-- now insert the data
...
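For the loading step itself, a typical COPY invocation would look something like this (the file path and format options here are assumptions, adjust them to your data):

copy the_table from '/path/to/data.csv' with (format csv, header true);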
-- and restore all indexes
do
$$
declare
l_rec record;
begin
for l_rec in select indexdef from index_backup
loop
execute l_rec.indexdef;
end loop;
end;
$$;
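One caveat the script above does not handle: DROP INDEX fails for an index that backs a PRIMARY KEY or UNIQUE constraint. Those have to be dropped and recreated through the constraint instead, along these lines (constraint and column names here are made up for illustration):

alter table the_table drop constraint the_table_pkey;
-- ... load the data ...
alter table the_table add constraint the_table_pkey primary key (id);

pg_indexes lists such indexes too, so if the table has them, filter them out of the drop loop or handle them separately via pg_constraint.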