Re: Postgres batch write very slow - what to do

From: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
To: femski <hypertree(at)yahoo(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres batch write very slow - what to do
Date: 2007-03-15 02:26:20
Message-ID: b42b73150703141926p220b86c3m663f12aabd678d78@mail.gmail.com
Lists: pgsql-performance

On 3/15/07, femski <hypertree(at)yahoo(dot)com> wrote:
>
> I am using Oracle XE so its using only one core and doing just fine.
> How do I split backend to 4+ processes ?
> I don't want to write a multithreaded loader app.
> And I didn't think Postgres can utilize multiple cores for the
> same insert statement.

Well, what SQL is the JDBC driver creating, exactly? It is probably
running the inserts inside a transaction. Your load is about 17k
inserts/sec, which is about right for Postgres on your hardware. You
have the following options to play with to increase insert performance:

* tweak postgresql.conf
fsync: off, if it is not already
wal_segments: bump to at least 24 or so
maintenance_work_mem: if you create keys after the insert, bump this high
(it speeds up CREATE INDEX)
bgwriter settings: you can play with these; try disabling the bgwriter
first (maxpages=0)
full_page_writes = off might help; not 100% sure about this
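
For reference, an illustrative postgresql.conf excerpt covering the
settings above (values are examples only; note there is no "wal_segments"
parameter, so checkpoint_segments is presumably what is meant, and the
bgwriter names below are the 8.1/8.2 ones):

  fsync = off                    # only if you can afford to lose data on a crash
  checkpoint_segments = 24       # presumably the "wal_segments" above
  maintenance_work_mem = 524288  # 512MB in kB; speeds CREATE INDEX after the load
  bgwriter_lru_maxpages = 0      # the "maxpages=0" suggestion
  bgwriter_all_maxpages = 0      # disable the other bgwriter scan as well
  full_page_writes = off         # might help; not 100% sure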

* distribute load
make the load app multi-threaded.
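
For the multi-threaded route, here is a rough sketch of what the loader
could look like with plain JDBC (class, table, and connection details
are made up for illustration):

  import java.sql.*;
  import java.util.concurrent.*;

  public class ParallelLoader {
      public static void main(String[] args) throws Exception {
          final int threads = 4;   // roughly one worker per core
          ExecutorService pool = Executors.newFixedThreadPool(threads);
          for (int t = 0; t < threads; t++) {
              pool.submit(new Callable<Void>() {
                  public Void call() throws Exception {
                      // each worker gets its own connection and transaction
                      Connection conn = DriverManager.getConnection(
                          "jdbc:postgresql://localhost/test", "user", "pass");
                      conn.setAutoCommit(false);
                      PreparedStatement ps = conn.prepareStatement(
                          "INSERT INTO mytable (id, val) VALUES (?, ?)");
                      // ... bind and addBatch() this worker's slice of the data ...
                      ps.executeBatch();
                      conn.commit();
                      ps.close();
                      conn.close();
                      return null;
                  }
              });
          }
          pool.shutdown();
          pool.awaitTermination(1, TimeUnit.HOURS);
      }
  }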

* use COPY for bulk load
[is there a way to make the jdbc driver do this?]
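
To answer the bracketed question: recent builds of the PostgreSQL JDBC
driver expose a CopyManager API (org.postgresql.copy.CopyManager);
assuming your driver version ships it, a minimal sketch looks like this
(table and data are hypothetical):

  import java.io.StringReader;
  import java.sql.Connection;
  import java.sql.DriverManager;
  import org.postgresql.PGConnection;
  import org.postgresql.copy.CopyManager;

  public class CopyLoader {
      public static void main(String[] args) throws Exception {
          Connection conn = DriverManager.getConnection(
              "jdbc:postgresql://localhost/test", "user", "pass");
          CopyManager cm = ((PGConnection) conn).getCopyAPI();
          // stream tab-separated rows straight into COPY ... FROM STDIN
          long rows = cm.copyIn("COPY mytable (id, val) FROM STDIN",
                                new StringReader("1\tfoo\n2\tbar\n"));
          System.out.println("copied " + rows + " rows");
          conn.close();
      }
  }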

* use multi-row inserts (at least 10 rows per INSERT); nearly as fast as COPY (see the sketch after the next point)

* if the jdbc driver is not already doing so, prepare your statements and then execute them.
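
Here is a sketch that combines the last two points: a prepared multi-row
INSERT (multi-row VALUES needs PostgreSQL 8.2 or later) run inside a
single transaction. The 10 rows per statement just follows the
suggestion above, and the table is made up:

  import java.sql.*;

  public class MultiRowInsert {
      static final int ROWS_PER_STMT = 10;

      public static void main(String[] args) throws Exception {
          Connection conn = DriverManager.getConnection(
              "jdbc:postgresql://localhost/test", "user", "pass");
          conn.setAutoCommit(false);

          // build "INSERT ... VALUES (?, ?), (?, ?), ..." once, prepare it, reuse it
          StringBuilder sql = new StringBuilder("INSERT INTO mytable (id, val) VALUES ");
          for (int i = 0; i < ROWS_PER_STMT; i++)
              sql.append(i == 0 ? "(?, ?)" : ", (?, ?)");
          PreparedStatement ps = conn.prepareStatement(sql.toString());

          // bind one group of 10 rows and execute; loop this over your data set
          int p = 1;
          for (int row = 0; row < ROWS_PER_STMT; row++) {
              ps.setInt(p++, row);
              ps.setString(p++, "value-" + row);
          }
          ps.executeUpdate();

          conn.commit();
          ps.close();
          conn.close();
      }
  }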

merlin
