From: Mike Broers <mbroers(at)gmail(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: driving postgres to achieve benchmark results similar to bonnie++
Date: 2016-05-10 15:48:08
Message-ID: CAB9893gs_pWP7zpD1HDw0+K41vHwPx72T2DsiTa5+zhqV3wYsg@mail.gmail.com
Lists: pgsql-admin
I'm having trouble getting postgres to drive enough disk activity to get
even close to the disk benchmark numbers I'm getting with bonnie++. We have
an SSD SAN, and the xlog is on its own SSD volume as well; postgres 9.5
running on CentOS 6.
bonnie++ -n 0 -f -b is the command I'm running, pointed at either the
primary data or the xlog location. I'm consistently seeing numbers like this:
Version 1.03e       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
              Size  K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
            23808M            786274  92 157465 29           316097 17  5751 16
So during bonnie++ tests I've confirmed in our monitoring that writes peak
at 700-800 MB/sec and reads peak around 280-300 MB/sec.
We have 12GB RAM on the server. When I run pgbench at a scale that puts
the pgbench database in the realm of 18GB - 25GB, I barely break 110MB/sec
writes and 80MB/sec reads. I've run with different options, such as unlogged
versus logged tables, prepared transactions or not, and transaction
counts between 1000 and 40000.
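For reference, a sketch of what these runs look like. The scale, client, and
thread counts below are illustrative rather than the exact values from every
run, and the ~15 MB per pgbench scale unit is a rough rule of thumb (so -s 1500
lands around 22 GB):

```shell
#!/bin/sh
# Representative pgbench invocations (numbers are illustrative, not the
# precise values used in every run).

SCALE=1500       # ~22 GB at roughly 15 MB per scale unit (approximation)
CLIENTS=32
THREADS=4
TXNS=10000       # per client; tried 1000 - 40000

# Initialize with unlogged tables (one of the variants tried):
INIT="pgbench -i -s $SCALE --unlogged-tables bench"
# Run with prepared statements (another variant tried):
RUN="pgbench -c $CLIENTS -j $THREADS -t $TXNS -M prepared bench"

echo "$INIT"
echo "$RUN"
```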
I thought a parallel pg_dump / restore might also drive the disk, but that
doesn't drive disk throughput either, topping out around 75MB/sec read.
Nightly vacuums also seem to peak below 110MB/sec reads.
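For what it's worth, the parallel dump/restore was along these lines. The
database name, output path, and job count here are placeholders; directory
format (-F d) is what allows pg_dump's -j parallelism:

```shell
#!/bin/sh
# Parallel dump/restore sketch (database name, path, and job count are
# placeholders, not the actual values).

JOBS=4
DUMP="pg_dump -F d -j $JOBS -f /tmp/dumpdir mydb"
RESTORE="pg_restore -j $JOBS -d mydb /tmp/dumpdir"

echo "$DUMP"
echo "$RESTORE"
```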
Here are the non-default pg settings:
max_connections = 1024
shared_buffers = 1024MB
wal_buffers = 16MB
checkpoint_completion_target = '.9'
archive_mode = on
random_page_cost = '1.5'
maintenance_work_mem = 512MB
work_mem = 64MB
max_wal_senders = 5
checkpoint_timeout = 10min
effective_io_concurrency = 4
effective_cache_size = 8GB
wal_keep_segments = 512
wal_level = hot_standby
synchronous_commit = off
Any idea if/why postgres might be bottlenecking disk throughput? Or is
there a method I'm missing for testing from within postgres that would
achieve something closer to the bonnie++ levels? I'm guessing I'm just not
driving enough activity to push it to the limit, but I'm not sure of a
straightforward method to verify this.
Thanks,
Mike