From: Sezai YILMAZ <sezai(dot)yilmaz(at)pro-g(dot)com(dot)tr>
To: pgsql-general(at)postgresql(dot)org
Subject: PostgreSQL insert speed tests
Date: 2004-02-27 12:15:10
Message-ID: 403F34CE.5000702@pro-g.com.tr
Lists: pgsql-general
Hello
I need high throughput when inserting into PostgreSQL, so I ran some
insert performance tests.
------------------------------------------------------------
-- Test schema
create table logs (
    logid serial primary key,
    ctime integer not null,
    stime integer not null,
    itime integer not null,
    agentid integer not null,
    subagentid integer not null,
    ownerid integer not null,
    hostid integer not null,
    appname varchar(64) default null,
    logbody varchar(1024) not null
);
create index ctime_ndx on logs using btree (ctime);
create index stime_ndx on logs using btree (stime);
create index itime_ndx on logs using btree (itime);
create index agentid_ndx on logs using hash (agentid);
create index ownerid_ndx on logs using hash (ownerid);
create index hostid_ndx on logs using hash (hostid);
------------------------------------------------------------
Test Hardware:
IBM Thinkpad R40
CPU: Pentium 4 Mobile, 1993 MHz (at full power)
RAM: 512 MB
OS: GNU/Linux, Fedora Core 1, kernel 2.4.24
A test program developed with libpq inserts 200,000 rows into the logs
table. Insertions are made in batches of 100 rows per transaction
(2,000 transactions in total).
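At the SQL level, each of the 2,000 batches looks roughly like this (the
column values below are only placeholders; the real program generates its
own data):
------------------------------------------------------------
-- Shape of one batch (placeholder values)
begin;
insert into logs (ctime, stime, itime, agentid, subagentid,
                  ownerid, hostid, appname, logbody)
values (1077880510, 1077880515, 1077880520, 1, 1, 1, 1,
        'testapp', 'sample log body');
-- ... 99 more inserts like the one above ...
commit;
------------------------------------------------------------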
Some of the changed parameters in the postgresql.conf file follow:
----------------------------------------------------------------
shared_buffers = 2048 # min max_connections*2 or 16, 8KB each
max_fsm_relations = 20000 # min 10, fsm is free space map, ~40 bytes
max_fsm_pages = 200000 # min 1000, fsm is free space map, ~6 bytes
max_locks_per_transaction = 256 # min 10
wal_buffers = 64 # min 4, typically 8KB each
sort_mem = 32768 # min 64, size in KB
vacuum_mem = 16384 # min 1024, size in KB
checkpoint_segments = 6 # in logfile segments, min 1, 16MB each
checkpoint_timeout = 900 # range 30-3600, in seconds
fsync = true
wal_sync_method = fsync # the default varies across platforms
enable_seqscan = true
enable_indexscan = true
enable_tidscan = true
enable_sort = true
enable_nestloop = true
enable_mergejoin = true
enable_hashjoin = true
effective_cache_size = 2000 # typically 8KB each
geqo = true
geqo_selection_bias = 2.0 # range 1.5-2.0
geqo_threshold = 11
geqo_pool_size = 0 # default based on tables in statement,
# range 128-1024
geqo_effort = 1
geqo_generations = 0
geqo_random_seed = -1 # auto-compute seed
----------------------------------------------------------------
The test was run with both PostgreSQL 7.3.4 and PostgreSQL 7.4.1 (the
test program was recompiled for each version).
The results are below (average inserted rows per second).
# of EXISTING RECORDS      speed for PostgreSQL 7.3.4   speed for PostgreSQL 7.4.1
==================================================================================
      0 initial records           1086 rows/s                  1324 rows/s
200,000 initial records            781 rows/s                   893 rows/s
400,000 initial records            576 rows/s                   213 rows/s
600,000 initial records            419 rows/s                   200 rows/s
800,000 initial records            408 rows/s           not tested because of bad results
When the logs table is rebuilt with only one index (the primary key), the
speed reaches 2941 rows/s. But I need all seven indexes.
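For reference, rebuilding with only the primary key amounts to something
like dropping the six secondary indexes from the schema above before the
load (and recreating them afterwards if they are needed again):
------------------------------------------------------------
-- Single-index variant: drop the secondary indexes before the
-- load, recreate them afterwards if needed
drop index ctime_ndx;
drop index stime_ndx;
drop index itime_ndx;
drop index agentid_ndx;
drop index ownerid_ndx;
drop index hostid_ndx;
-- ... bulk load runs here ...
create index ctime_ndx on logs using btree (ctime);
create index stime_ndx on logs using btree (stime);
create index itime_ndx on logs using btree (itime);
create index agentid_ndx on logs using hash (agentid);
create index ownerid_ndx on logs using hash (ownerid);
create index hostid_ndx on logs using hash (hostid);
------------------------------------------------------------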
The question is: why is PostgreSQL 7.4.1 so slow under heavy load?
Is there a way to speed up inserts without eliminating indexes?
What about concurrent inserts into the same table (running additional
copies of the test program at the same time)? It did not work.
-sezai