From: | "Lim Berger" <straightfwd007(at)gmail(dot)com> |
---|---|
To: | "Andrej Ricnik-Bay" <andrej(dot)groups(at)gmail(dot)com> |
Cc: | "Postgresql General List" <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Postgresql INSERT speed (how to improve performance)? |
Date: | 2007-08-14 03:06:47 |
Message-ID: | 69d2538f0708132006u18bc948ap27466282cd333d09@mail.gmail.com |
Lists:      pgsql-general
On 8/14/07, Andrej Ricnik-Bay <andrej(dot)groups(at)gmail(dot)com> wrote:
> On 8/14/07, Lim Berger <straightfwd007(at)gmail(dot)com> wrote:
>
> > INSERTing into MySQL takes 0.0001 seconds per insert query.
> > INSERTing into PgSQL takes 0.871 seconds per (much smaller) insert query.
> >
> > What can I do to improve this performance? What could be going wrong
> > to elicit such poor insertion performance from Postgresql?
> MySQL might not be writing the data straight out
> to disk ... just a guess.
>
The MySQL table is MyISAM, yes, so there is no transaction support. I
would like PgSQL to behave the same way. These are not a batch of
queries, so I cannot bundle them inside a single transaction; they are
individual submissions from the web.
To make PG behave in the above manner, I have the following in my conf:
commit_delay = 0
fsync = on
wal_buffers = 64
checkpoint_segments = 64
checkpoint_timeout = 900
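For reference, my (possibly wrong) reading of the docs is that the
setting which actually skips the per-commit flush to disk is fsync
itself, so a MyISAM-like setup would look more like the sketch below.
The delay values are just illustrative numbers, not something I have
tested:

# sketch of a non-durable, MyISAM-like configuration (risks data loss on crash)
fsync = off            # do not force WAL to disk at every commit
commit_delay = 100     # microseconds to wait so concurrent commits can share one flush
commit_siblings = 5    # ...but only when at least this many other transactions are active

I have not turned fsync off so far because I understand it sacrifices
crash safety.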
Am I missing something? (I may well be). Would explicitly issuing a
"COMMIT" command help at all? Should I do the following:
BEGIN TRANSACTION;
INSERT INTO...;
COMMIT;
Would this be faster?
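My understanding of autocommit, and I may be wrong here too, is that a
single statement is already wrapped in its own implicit transaction, so
the two forms below should do the same amount of work (table and column
names are made up for illustration):

-- explicit wrap
BEGIN TRANSACTION;
INSERT INTO mytable (val) VALUES ('x');
COMMIT;

-- bare statement, implicitly its own transaction with its own commit
INSERT INTO mytable (val) VALUES ('x');

If that is right, the explicit COMMIT alone would not change the
timing, but I would appreciate confirmation.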