| From: | "Peter J(dot) Holzer" <hjp-pgsql(at)hjp(dot)at> |
|---|---|
| To: | pgsql-general(at)lists(dot)postgresql(dot)org |
| Subject: | Re: Most effective and fast way to load few Tbyte of data from flat files into postgresql |
| Date: | 2020-08-25 11:24:00 |
| Message-ID: | 20200825112400.GA19594@hjp.at |
| Lists: | pgsql-general |
On 2020-08-24 21:17:36 +0000, Dirk Krautschick wrote:
> what would be the fastest or most effective way to load a few (5-10) TB
> of data from flat files into a PostgreSQL database, including some 1 TB
> tables and blobs?
>
> There is the COPY command, but there is no native parallelism, right?
> I have found pg_bulkload but haven't tested it yet. As far as I can
> see, EDB has its EDB*Loader as a commercial option.
A single COPY isn't parallel, but you can run several of them in
parallel (that's what pg_restore -j N does). So the total time may be
dominated by your largest table (or I/O bandwidth).
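For illustration (this is not from the original thread), here is a minimal
Python sketch of that approach using psycopg2: one worker process per file,
each running its own COPY FROM STDIN. The connection string, the directory
name, and the assumption that each CSV file is named after its target table
are all placeholders for this example:

```python
# Sketch: load many flat files in parallel, one COPY per file.
# Assumes each data/*.csv file loads into an existing table whose
# name matches the file's stem (an assumption for illustration).
import pathlib
from multiprocessing import Pool

import psycopg2
from psycopg2 import sql

DSN = "dbname=target user=loader"  # hypothetical connection string


def load_file(path_str):
    """Run a single COPY FROM STDIN for one CSV file."""
    path = pathlib.Path(path_str)
    table = path.stem  # assumption: file name matches table name
    copy_stmt = sql.SQL("COPY {} FROM STDIN WITH (FORMAT csv)").format(
        sql.Identifier(table)
    )
    conn = psycopg2.connect(DSN)
    try:
        # The connection context manager commits on success.
        with conn, conn.cursor() as cur, open(path) as f:
            cur.copy_expert(copy_stmt.as_string(conn), f)
    finally:
        conn.close()
    return path_str


if __name__ == "__main__":
    files = sorted(str(p) for p in pathlib.Path("data").glob("*.csv"))
    with Pool(processes=8) as pool:  # 8 parallel COPY streams; tune this
        for done in pool.imap_unordered(load_file, files):
            print("loaded", done)
```

Splitting the largest table's file into several chunks and feeding each
chunk to its own COPY is one way to keep that one table from dominating
the wall-clock time.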
hp
--
   _  | Peter J. Holzer    | Story must make more sense than reality.
|_|_) |                    |
| |   | hjp(at)hjp(dot)at  |    -- Charles Stross, "Creative writing
__/   | http://www.hjp.at/ |       challenge!"