From: Magnus Hagander <magnus(at)hagander(dot)net>
To: spotluri(at)ismartpanache(dot)com
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: bulk data loading
Date: 2008-04-08 07:40:17
Message-ID: 20080408094017.349949af@mha-laptop
Lists: pgsql-performance
Potluri Srikanth wrote:
> Hi all,
>
> I need to do a bulk data load of around 704 GB (log file size) at
> present in 8 hrs (1 am - 9 am). The data file size may grow 3 to
> 5 times in the future.
>
> Using COPY, it takes 96 hrs to finish the task.
> What is the best way to do it?
>
> HARDWARE: Sun Thumper / RAID10
> OS: Solaris 10
> DB: Greenplum/Postgres
If you're using Greenplum, you should probably be talking to the
Greenplum folks. IIRC, they have made some fairly large changes to the
load process, so they'll be the ones with the proper answers for you.
//Magnus
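
For reference, the Greenplum load-process changes Magnus mentions center on
gpfdist-served external tables, which let every segment pull data in parallel
instead of funneling it through a single COPY stream. A minimal sketch,
assuming CSV input; the host, port, paths, table names, and columns below are
all hypothetical:

    -- Start a gpfdist file server on the ETL host first, e.g.:
    --   gpfdist -d /data/logs -p 8081 &

    -- Readable external table pointing at the staged files
    -- (columns are placeholders; match them to the real log layout).
    CREATE EXTERNAL TABLE log_staging_ext (
        event_time  timestamp,
        host        text,
        message     text
    )
    LOCATION ('gpfdist://etl-host:8081/*.csv')
    FORMAT 'CSV';

    -- Segments fetch from gpfdist concurrently during this insert.
    INSERT INTO log_table SELECT * FROM log_staging_ext;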