From: | "Jonah H(dot) Harris" <jonah(dot)harris(at)gmail(dot)com> |
To: | "Adonias Malosso" <malosso(at)gmail(dot)com> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Best practice to load a huge table from ORACLE to PG |
Date: | 2008-04-27 01:14:53 |
Message-ID: | 36e682920804261814w3508b232n6cf935874b19bf31@mail.gmail.com |
Lists: pgsql-performance
On Sat, Apr 26, 2008 at 9:25 AM, Adonias Malosso <malosso(at)gmail(dot)com> wrote:
> I'd like to know what's the best practice to LOAD a 70 million row, 101
> column table from ORACLE to PGSQL.
The fastest and easiest method would be to dump the data from Oracle
into CSV/delimited format using something like ociuldr
(http://www.anysql.net/en/ociuldr.html) and load it back into PG using
pg_bulkload (which is a helluva lot faster than COPY). Of course, you
could try other things as well... such as setting up generic
connectivity to PG and inserting the data into a PG table over the
database link.
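For the curious, here's roughly what that dump-and-load pipeline looks
like. The exact ociuldr parameters and pg_bulkload control-file directives
vary by version (the ones below are from memory, so double-check each
tool's docs), and the table, file, and connection names are placeholders:

    # 1. dump from Oracle into a delimited file with ociuldr
    ociuldr user=scott/tiger@ORCL \
            query="select * from big_table" \
            field=, record=0x0a file=big_table.csv

    # 2. load it into PG with pg_bulkload, driven by a small control file
    cat > big_table.ctl <<'EOF'
    TABLE = big_table
    INFILE = /path/to/big_table.csv
    TYPE = CSV
    DELIMITER = ,
    EOF
    pg_bulkload -d targetdb big_table.ctl

    # plain COPY works too, just slower:
    psql -d targetdb -c "\copy big_table from 'big_table.csv' with csv"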
Similarly, while I hate to see shameless self-plugs in the community,
the *fastest* method you could use is dblink_ora_copy, contained in
EnterpriseDB's PG+ Advanced Server; it uses an optimized OCI
connection to COPY the data directly from Oracle into Postgres, which
also saves you the intermediate step of dumping the data.
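If you'd rather stick to open-source pieces and still skip the
intermediate file, you can pipe the Oracle client straight into COPY FROM
STDIN. A rough sketch, not dblink_ora_copy: you have to make the SELECT
emit CSV yourself (tedious with 101 columns), and embedded commas, quotes,
and NULLs need escaping:

    sqlplus -S scott/tiger@ORCL <<'EOF' | \
        psql -d targetdb -c "\copy big_table from stdin with csv"
    SET PAGESIZE 0 FEEDBACK OFF HEADING OFF TRIMOUT ON LINESIZE 32767
    SELECT col1 || ',' || col2 || ',' || col3 FROM big_table;
    EXIT
    EOF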
--
Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324
EnterpriseDB Corporation | fax: 732.331.1301
499 Thornall Street, 2nd Floor | jonah(dot)harris(at)enterprisedb(dot)com
Edison, NJ 08837 | http://www.enterprisedb.com/