From: Sam Mason <sam(at)samason(dot)me(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: large inserts and fsync
Date: 2008-09-05 15:32:58
Message-ID: 20080905153258.GZ7271@frubble.xen.chris-lamb.co.uk
Lists: pgsql-general
On Fri, Sep 05, 2008 at 11:19:13AM -0400, Aaron Burnett wrote:
> On 9/5/08 11:10 AM, "Sam Mason" <sam(at)samason(dot)me(dot)uk> wrote:
> > On Fri, Sep 05, 2008 at 09:16:41AM -0400, Aaron Burnett wrote:
> >> For an upcoming release there is a 16 million row insert that on our test
> >> cluster takes about 2.5 hours to complete with all indices dropped
> >> beforehand.
> >>
> >> If I turn off fsync, it completes in under 10 minutes.
> >
> > Have you tried bundling all the INSERT statements into a single
> > transaction?
>
> Yes, the developer already made sure of that and I verified.
I was under the impression that the only time PG synced the data to disk
was when the transaction was COMMITed. I've never needed to turn off
fsync for performance reasons even when pulling in hundreds of millions
of rows. I do tend to use a single large COPY rather than many small
INSERT statements. PG spends an inordinate amount of time parsing
millions of SQL statements, whereas a tab-delimited file is much easier
to parse.
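Roughly something like this, all in one transaction (the table and file
names here are only placeholders):

  BEGIN;
  -- server-side bulk load from a tab-delimited file,
  -- committed as a single transaction
  COPY mytable (id, name, created_at)
      FROM '/tmp/mytable.tsv';
  COMMIT;

or the client-side equivalent from psql:

  \copy mytable (id, name, created_at) from 'mytable.tsv'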
Could you try bumping "checkpoint_segments" up a bit? Or have you tried
that already?
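Something along these lines in postgresql.conf, followed by a config
reload (the numbers are only a rough starting point, not a
recommendation):

  # spread checkpoints out during the bulk load
  checkpoint_segments = 64      # default is 3; each WAL segment is 16MB
  checkpoint_timeout = 15min    # default is 5min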
Sam