From: Luke Lonergan <llonergan(at)greenplum(dot)com>
To: Adonias Malosso <malosso(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Best practice to load a huge table from ORACLE to PG
Date: 2008-04-26 16:13:34
Message-ID: C438A2BE.5C656%llonergan@greenplum.com
Lists: pgsql-performance
Yep, just do something like this within sqlplus (adapted from
http://www.dbforums.com/showthread.php?t=350614):
set termout off
set heading off
set pagesize 0
spool c:\whatever.csv
select a.a||','||a.b||','||a.c
from a
where a.a = 'whatever';  -- string literals take single quotes in Oracle
spool off
COPY is the fastest approach to get it into PG.
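On the Postgres side the load is then a single COPY. A minimal sketch, assuming a target table and column names matching the spool example above (the file path is also an assumption, and must be readable by the server process):

    -- server-side load of the spooled file; table "a" and its columns are assumed
    COPY a (a, b, c) FROM '/tmp/whatever.csv' WITH CSV;

If the CSV sits on the client machine instead, psql's \copy variant does the same thing but streams the file over the connection.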
- Luke
On 4/26/08 6:25 AM, "Adonias Malosso" <malosso(at)gmail(dot)com> wrote:
> Hi All,
>
> I'd like to know what's the best practice to LOAD a 70 million row, 101
> column table
> from ORACLE to PGSQL.
>
> The current approach is to dump the data to CSV and then COPY it into
> Postgresql.
>
> Does anyone have a better idea?
>
>
> Regards
> Adonias Malosso
>