Re: autocommit (true/false) for more than 1 million records

From: David Johnston <david(dot)g(dot)johnston(at)gmail(dot)com>
To: emilu(at)encs(dot)concordia(dot)ca
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: autocommit (true/false) for more than 1 million records
Date: 2014-08-25 13:51:25
Message-ID: CAKFQuwYaGZmpv6ZxNqifs0DNw2y_-Pf2CcR55zuK8fACjr0eMw@mail.gmail.com
Lists: pgsql-performance

On Mon, Aug 25, 2014 at 9:40 AM, Emi Lu <emilu(at)encs(dot)concordia(dot)ca> wrote:

>
> By the way, could someone let me know why set autocommit(false) is for
> sure faster than true please? Or, some online docs talk about this.
>
>
Not sure about the docs specifically, but:

Commit is expensive because, as soon as it is issued, all of the data written
in the transaction must be guaranteed durable on disk. While ultimately the
same amount of data is guaranteed either way, doing commits in batches gives
you the opportunity to achieve economies of scale: one flush covers many rows
instead of one flush per row.

(I think...)
When you commit you flush data to disk; until then the server can work in
RAM. Once you exhaust that RAM you might as well commit and free it up for
the next batch.
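The economies-of-scale point above can be seen in a minimal sketch. This is
not PostgreSQL: it uses SQLite via Python's stdlib sqlite3 module purely as
an illustration, since SQLite's COMMIT likewise forces a durable flush. The
table name, row count, and batch sizes are arbitrary choices for the demo.

```python
import os
import sqlite3
import tempfile
import time

def load(path, rows, batch_size):
    """Insert `rows` integers, committing every `batch_size` rows.
    Returns (elapsed seconds, final row count)."""
    conn = sqlite3.connect(path)
    conn.execute("PRAGMA synchronous = FULL")  # force a real flush on each commit
    conn.execute("CREATE TABLE t (id INTEGER)")
    start = time.perf_counter()
    for i in range(rows):
        conn.execute("INSERT INTO t VALUES (?)", (i,))
        if (i + 1) % batch_size == 0:
            conn.commit()  # pay the flush cost here
    conn.commit()  # commit any trailing partial batch
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
    conn.close()
    return elapsed, count

with tempfile.TemporaryDirectory() as d:
    # Same data both times; only the commit frequency differs.
    per_row, n1 = load(os.path.join(d, "a.db"), 1000, 1)     # commit every row
    batched, n2 = load(os.path.join(d, "b.db"), 1000, 1000)  # one commit total
    print(n1, n2)  # → 1000 1000
    print(f"per-row commits: {per_row:.3f}s, one batched commit: {batched:.3f}s")
```

On most systems the batched run is dramatically faster, because the per-row
run pays the flush cost a thousand times for the same amount of data.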

David J.
