From: Vijaykumar Jain <vjain(at)opentable(dot)com>
To: Christopher Browne <cbbrowne(at)gmail(dot)com>, "sravikrishna(at)aol(dot)com" <sravikrishna(at)aol(dot)com>
Cc: pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: [External] Multiple COPY on the same table
Date: 2018-08-20 17:28:24
Message-ID: 6841C20B-0BED-4632-A355-D6C59E91ED18@opentable.com
Lists: pgsql-general
I guess this should help you, Ravi.
https://www.postgresql.org/docs/10/static/populate.html
On 8/20/18, 10:30 PM, "Christopher Browne" <cbbrowne(at)gmail(dot)com> wrote:
On Mon, 20 Aug 2018 at 12:53, Ravi Krishna <sravikrishna(at)aol(dot)com> wrote:
> > What is the goal you are trying to achieve here.
> > To make pgdump/restore faster?
> > To make replication faster?
> > To make backup faster ?
>
> None of the above.
>
> We got CSV files from an external vendor, 880 GB in total across 44 files. Some of the large tables had COPY running for several hours, so I was just thinking of a faster way to load.
Seems like #4...
#4 - To Make Recovery faster
Using COPY pretty much *is* the "faster way to load"...
The main thing you should consider doing to make it faster is to drop
indexes and foreign keys from the tables, and recreate them
afterwards.
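The drop-load-recreate pattern described above might look something like this (a minimal sketch; the table, index, and constraint names are hypothetical, as is the CSV path):

```
-- Hypothetical table "orders" being bulk-loaded from a vendor CSV.
ALTER TABLE orders DROP CONSTRAINT orders_customer_id_fkey;  -- drop foreign key
DROP INDEX orders_created_at_idx;                            -- drop secondary index

COPY orders FROM '/data/vendor/orders.csv' WITH (FORMAT csv, HEADER true);

-- Recreate afterwards: building an index in one pass over loaded data
-- is much cheaper than maintaining it row-by-row during the load.
CREATE INDEX orders_created_at_idx ON orders (created_at);
ALTER TABLE orders ADD CONSTRAINT orders_customer_id_fkey
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```

The "Populating a Database" chapter linked earlier in the thread covers the same idea, along with related settings such as raising maintenance_work_mem for the index builds.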
--
When confronted by a difficult problem, solve it by reducing it to the
question, "How would the Lone Ranger handle this?"