| From: | Dimitri Fontaine <dfontaine(at)hi-media(dot)com> |
|---|---|
| To: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: Improve COPY performance for large data sets |
| Date: | 2008-09-10 17:17:37 |
| Message-ID: | 200809101917.40204.dfontaine@hi-media.com |
| Lists: | pgsql-performance |
Hi,
On Wednesday 10 September 2008, Ryan Hansen wrote:
> One thing I'm experiencing some trouble with is running a COPY of a
> large file (20+ million records) into a table in a reasonable amount of
> time. Currently it's taking about 12 hours to complete on a 64 bit
> server with 3 GB memory allocated (shared_buffer), single SATA 320 GB
> drive. I don't seem to get any improvement running the same operation
> on a dual opteron dual-core, 16 GB server.
Your single SATA disk is probably very busy alternating between reading the
source file and writing the data. You could try raising checkpoint_segments
to 64 or more, but a single SATA disk won't give you high I/O performance.
You're getting what you paid for...
You could maybe ease the disk load by launching the COPY from a remote (local
network) machine, and, while you're at it, if the file is big, try parallel
loading with pgloader.
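For instance, psql's \copy reads the file on the client side and streams the
rows over the network, so the source-file reads no longer compete with the
server's writes (host, database and file names below are made up):

    remote$ psql -h dbserver -d mydb \
            -c "\copy big_table from '/data/big_file.csv' with csv"

And here is a minimal pgloader configuration sketch for loading one big file
in parallel -- check the pgloader documentation for the exact option names
and defaults:

    [pgsql]
    host = dbserver
    base = mydb
    user = loader

    [big_table]
    table = big_table
    format = csv
    filename = /data/big_file.csv
    field_sep = ,
    columns = *
    section_threads = 4         # workers loading this section in parallel
    split_file_reading = True   # each worker reads its own chunk of the file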
Regards,
--
dim