From: | "Merlin Moncure" <mmoncure(at)gmail(dot)com> |
---|---|
To: | "Sergey A(dot)" <n39052(at)gmail(dot)com> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: How to force PostgreSQL to use multiple cores within one connection? |
Date: | 2008-10-01 13:24:20 |
Message-ID: | b42b73150810010624t68aa49d6gd6408c6bfba1739b@mail.gmail.com |
Lists: | pgsql-general |
On Wed, Oct 1, 2008 at 6:44 AM, Sergey A. <n39052(at)gmail(dot)com> wrote:
> Hello.
>
> My application generates a large number of inserts (~2000 per second)
> using one connection to PostgreSQL. All queries are buffered in memory
> and then the whole buffers are sent to the DB. But when I use two
> connections to PostgreSQL instead of one on a dual-core CPU (i.e. I use
> two PostgreSQL processes) to insert my buffers, I see that things
> go 1.6 times faster.
>
> Using several connections in my application is somewhat tricky, so I
> want to move this problem to PostgreSQL's side. Is there any method
> for PostgreSQL to process huge inserts coming from one connection on
> different cores?
If you are buffering inserts, you can get an easy performance boost by
using COPY as others have suggested. Another approach is to use a
multi-row insert statement:
insert into something values (1,2,3), (2,4,6), ...
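Since the rows are already buffered in application memory, a multi-row statement can be assembled there. A minimal sketch in Python (table and column names are placeholders, not from the original mail; the parameter style is the `%s` placeholder used by drivers such as psycopg2):

```python
def build_multirow_insert(table, columns, rows):
    """Return a parameterized multi-row INSERT plus its flat value list.

    Placeholders are used instead of interpolating values directly,
    so the driver handles quoting/escaping.
    """
    one_row = "(" + ", ".join(["%s"] * len(columns)) + ")"
    placeholders = ", ".join(one_row for _ in rows)
    sql = "insert into {} ({}) values {}".format(
        table, ", ".join(columns), placeholders
    )
    # Flatten the row tuples into the single parameter list the driver expects.
    params = [value for row in rows for value in row]
    return sql, params

sql, params = build_multirow_insert(
    "something", ["a", "b", "c"], [(1, 2, 3), (2, 4, 6)]
)
# sql    -> "insert into something (a, b, c) values (%s, %s, %s), (%s, %s, %s)"
# params -> [1, 2, 3, 2, 4, 6]
```

The statement and parameter list would then be passed to the driver's execute call on one round trip, which is where the saving over 2000 single-row inserts comes from.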
Using multiple CPUs basically requires multiple connections. This can
be easy or difficult depending on how you are connecting to the
database.
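The multiple-connection approach can be sketched as one worker thread per connection, each draining batches from a shared queue. This is only an illustrative outline: `execute_batch` is a hypothetical stand-in for whatever a real worker would do with its own database connection, so the sketch runs without a server.

```python
import queue
import threading

def run_workers(batches, n_workers, execute_batch):
    """Spread pre-buffered batches across n_workers threads.

    In a real application each worker would open its own connection
    and send its batch (via COPY or a multi-row INSERT); here the
    caller supplies execute_batch so the structure is testable.
    """
    q = queue.Queue()
    for batch in batches:
        q.put(batch)

    def worker():
        while True:
            try:
                batch = q.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            execute_batch(batch)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

done = []
lock = threading.Lock()

def record(batch):
    with lock:
        done.extend(batch)

run_workers([[1, 2], [3, 4], [5, 6]], n_workers=2, execute_batch=record)
# every row is processed exactly once; batch order is not guaranteed
```

Each worker owning its own connection is what lets the backend processes land on different cores, matching the 1.6x speedup observed with two connections.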
merlin