From: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Cc: Greg Smith <gsmith(at)gregsmith(dot)com>, Adonias Malosso <malosso(at)gmail(dot)com>
Subject: Re: Best practice to load a huge table from ORACLE to PG
Date: 2008-04-28 07:49:37
Message-ID: 200804280949.40101.dfontaine@hi-media.com
Lists: pgsql-performance
Hi,
On Sunday 27 April 2008, Greg Smith wrote:
> than SQL*PLUS. Then on the PostgreSQL side, you could run multiple COPY
> sessions importing at once to read this data all back in, because COPY
> will bottleneck at the CPU level before the disks will if you've got
> reasonable storage hardware.
The latest pgloader release was built to handle exactly this case, so if you
want to take this route, please consider pgloader 2.3.0:
http://pgloader.projects.postgresql.org/#_parallel_loading
http://pgfoundry.org/projects/pgloader/
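
If you would rather script the parallel COPY sessions yourself, here is a
minimal sketch of the idea, assuming psycopg2 and a set of pre-split chunk
files (the connection string, file names and table name below are made up):

    # Minimal sketch: one COPY session per pre-split chunk of the export.
    # The DSN, file names and table name are assumptions for illustration.
    from multiprocessing import Pool
    import psycopg2

    DSN = "dbname=target user=loader"              # assumed connection string
    CHUNKS = ["/tmp/orders.1", "/tmp/orders.2",
              "/tmp/orders.3", "/tmp/orders.4"]    # pre-split data files

    def load_chunk(path):
        # Each worker opens its own connection, so every COPY runs in its
        # own backend and the sessions proceed concurrently.
        conn = psycopg2.connect(DSN)
        cur = conn.cursor()
        with open(path) as f:
            cur.copy_expert("COPY orders FROM STDIN WITH CSV", f)
        conn.commit()
        conn.close()
        return path

    if __name__ == "__main__":
        pool = Pool(len(CHUNKS))
        for done in pool.imap_unordered(load_chunk, CHUNKS):
            print("loaded", done)
        pool.close()
        pool.join()

pgloader does essentially this for you, and adds the error handling
described below.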
Another good reason to consider pgloader is when the data file contains
erroneous input lines and you don't want the whole COPY transaction to abort.
pgloader rejects those bad lines into a reject file while the correct ones get
COPYed in.
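
For reference, a pgloader section for such a load could look roughly like the
sketch below. This is from memory: the option names (notably reject_log and
reject_data) should be checked against the pgloader documentation, and the
paths and table name are made up:

    [pgsql]
    host = localhost
    port = 5432
    base = target
    user = loader

    [orders]
    table = orders
    format = csv
    filename = /tmp/orders.csv
    field_sep = ,
    columns = *
    ; bad input lines go to these files instead of aborting the load
    reject_log = /tmp/orders.rej.log
    reject_data = /tmp/orders.rej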
Regards,
--
dim