From: Steven Bradley <sbradley(at)llnl(dot)gov>
To: pgsql-interfaces(at)postgresql(dot)org
Subject: Performance
Date: 1999-06-23 22:05:09
Message-ID: 3.0.5.32.19990623150509.0092e990@poptop.llnl.gov
Lists: pgsql-interfaces
I'm having trouble getting adequate performance out of Postgres for a
real-time event logging application. The way I'm interfacing with the
database may be the problem:
I have simplified the problem down to a single non-indexed table with
about half a dozen columns (int4, timestamp, varchar, etc.). I wrote a
quick-and-dirty C program that uses the libpq interface to INSERT records
into the table in real time. The best performance I could achieve was on
the order of 15 inserts per second; what I need is much closer to
100 inserts per second.
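
For reference, the insert loop is essentially the following sketch (the
connection string, table name, and columns here are illustrative, not the
exact ones from my test):

    #include <stdio.h>
    #include <stdlib.h>
    #include "libpq-fe.h"

    int main(void)
    {
        PGconn   *conn;
        PGresult *res;
        char      query[512];
        int       i;

        conn = PQconnectdb("dbname=eventlog");  /* illustrative database */
        if (PQstatus(conn) == CONNECTION_BAD)
        {
            fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
            exit(1);
        }

        for (i = 0; i < 1000; i++)
        {
            /* one standalone INSERT per event -- each runs as its own
             * transaction, which is the pattern that tops out around
             * 15 inserts per second */
            sprintf(query,
                    "INSERT INTO events (id, stamp, msg) "
                    "VALUES (%d, 'now', 'event %d')", i, i);
            res = PQexec(conn, query);
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));
            PQclear(res);
        }

        PQfinish(conn);
        return 0;
    }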
I wanted to use a prepared SQL statement, but it turns out that Postgres
runs the query through the parser-planner-executor cycle on each iteration.
There is no way to prevent this.
The next thing I thought of was to "bulk load" several records in one
INSERT through the use of array processing. Do any of the Postgres
interfaces support this? (By arrays I don't mean array columns in the
table.)
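
If nothing like that exists, one fallback I can think of is wrapping
batches of INSERTs in a single transaction so that each row doesn't pay
its own commit overhead. A rough sketch, reusing the illustrative table
and includes from the loop above:

    /* Insert n rows inside one transaction; conn is assumed open. */
    static void insert_batch(PGconn *conn, int base, int n)
    {
        PGresult *res;
        char      query[512];
        int       i;

        res = PQexec(conn, "BEGIN");
        PQclear(res);
        for (i = 0; i < n; i++)
        {
            sprintf(query,
                    "INSERT INTO events (id, stamp, msg) "
                    "VALUES (%d, 'now', 'event %d')", base + i, base + i);
            res = PQexec(conn, query);
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "INSERT failed: %s", PQerrorMessage(conn));
            PQclear(res);
        }
        res = PQexec(conn, "COMMIT");   /* one commit for the whole batch */
        PQclear(res);
    }

Streaming rows through COPY events FROM stdin with PQputline()/PQendcopy()
might be another way to load in bulk, though that bypasses the normal
INSERT path.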
I'm currently running Postgres 6.4.2. I've heard that 6.5 has improved
performance; does anyone have any idea what the performance improvement is
like?
Is it unrealistic to expect Postgres to insert on the order of 100 records
per second on a Pentium 400 MHz/SCSI-class machine running Linux? (Solaris
on comparable hardware gives about half that performance.)
Thanks in advance...
Steven Bradley
Lawrence Livermore National Laboratory
sbradley(at)llnl(dot)gov