From: Christopher Browne <cbbrowne(at)gmail(dot)com>
To: sravikrishna(at)aol(dot)com
Cc: vjain(at)opentable(dot)com, "pgsql-generallists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: [External] Multiple COPY on the same table
Date: 2018-08-20 17:00:26
Message-ID: CAFNqd5VTrOZP=zW0x0myGDgvx1-U0-6SH-Pg64igW4CfqYDYkg@mail.gmail.com
Lists: pgsql-general
On Mon, 20 Aug 2018 at 12:53, Ravi Krishna <sravikrishna(at)aol(dot)com> wrote:
> > What is the goal you are trying to achieve here.
> > To make pgdump/restore faster?
> > To make replication faster?
> > To make backup faster ?
>
> None of the above.
>
> We got CSV files from an external vendor, 880 GB in total across 44 files. Some of the large tables had COPY running for several hours. I was just wondering if there is a faster way to load.
Seems like #4...
#4 - To Make Recovery faster
Using COPY pretty much *is* the "faster way to load"...
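As a minimal sketch of the bulk-load approach under discussion (table name and file path here are hypothetical placeholders, not from the thread):

```sql
-- Server-side bulk load of one of the vendor CSV files.
-- 'big_table' and the path are assumed names for illustration.
COPY big_table FROM '/data/vendor/file01.csv'
    WITH (FORMAT csv, HEADER true);
```

If the files live on the client rather than the database server, psql's `\copy` variant streams the same data over the connection instead.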
The main thing you should consider doing to make it faster is to drop
indexes and foreign keys from the tables, and recreate them
afterwards.
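The drop-and-recreate advice might look roughly like this sketch; the index, constraint, and column names are assumptions for illustration, and the exact DDL would come from inspecting your own schema (e.g. via pg_dump):

```sql
-- 1. Remove constraints and indexes so COPY only writes heap pages.
ALTER TABLE big_table DROP CONSTRAINT big_table_ref_fk;
DROP INDEX big_table_code_idx;

-- 2. Bulk load the data.
COPY big_table FROM '/data/vendor/file01.csv'
    WITH (FORMAT csv, HEADER true);

-- 3. Rebuild afterwards; building an index once over the full table
--    is far cheaper than maintaining it row by row during the load.
CREATE INDEX big_table_code_idx ON big_table (code);
ALTER TABLE big_table ADD CONSTRAINT big_table_ref_fk
    FOREIGN KEY (ref_id) REFERENCES ref_table (id);
```

Recreating the foreign key last also means it is validated in one pass at the end rather than checked per row.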
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"