Re: driving postgres to achieve benchmark results similar to bonnie++

From: Mike Broers <mbroers(at)gmail(dot)com>
To: John Scalia <jayknowsunix(at)gmail(dot)com>
Cc: "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: driving postgres to achieve benchmark results similar to bonnie++
Date: 2016-05-10 18:27:27
Message-ID: CAB9893hchqtAKnOtSQR4Wu=bcdxjeV9OGmDk9tiYx2-B=3_4aQ@mail.gmail.com
Lists: pgsql-admin

I've made those changes and seen negligible improvement (reads increased by
about 10MB/sec, from 99MB/sec to 112MB/sec), so they didn't hurt, but they
didn't exactly unleash huge gains.

Any ideas on how I can really push the Postgres server to see numbers closer
to the bonnie++ results, or an explanation of the ceiling?
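
(Next I may try driving several backends at once to rule out a single-session
ceiling. A rough sketch, with a hypothetical database and table name:

    # fire off 8 concurrent sequential scans of a table larger than RAM,
    # then watch aggregate disk read throughput while they run
    for i in $(seq 1 8); do
        psql -d bench -c "SELECT count(*) FROM pgbench_accounts" &
    done
    wait

As far as I know, a 9.5 backend executes a query single-threaded, so aggregate
throughput may only show up across parallel sessions.)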

On Tue, May 10, 2016 at 12:08 PM, Mike Broers <mbroers(at)gmail(dot)com> wrote:

> Thanks for the feedback, I'll update those configs, run some more tests,
> and follow up.
>
>
>
> On Tue, May 10, 2016 at 12:01 PM, John Scalia <jayknowsunix(at)gmail(dot)com>
> wrote:
>
>>
>>
>> Sent from my iPad
>>
>> On May 10, 2016, at 10:48 AM, Mike Broers <mbroers(at)gmail(dot)com> wrote:
>>
>> I'm having trouble getting Postgres to drive enough disk activity to get
>> even close to the disk benchmark numbers I'm getting with bonnie++. We have
>> an SSD SAN, the xlog is on its own SSD volume, and we're running Postgres
>> 9.5 on CentOS 6.
>>
>> bonnie++ -n 0 -f -b is the command I'm running (-n 0 skips the file-creation
>> tests, -f skips the per-character I/O tests, and -b disables write buffering,
>> issuing an fsync() after every write). Pointing it at either the primary data
>> or the xlog location, I'm consistently seeing numbers like this:
>>
>> Version 1.03e      ------Sequential Output------ --Sequential Input- --Random-
>>                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
>>               Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
>>             23808M           786274  92 157465  29           316097  17  5751  16
>>
>> So during the bonnie++ tests I've confirmed in our monitoring that writes
>> peak at 700-800 MB/sec and reads peak around 280-300 MB/sec.
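>>
>> (The numbers above came from our monitoring; a quick way to watch the same
>> thing live, assuming the sysstat package is installed, is:
>>
>>     # extended per-device stats in MB, refreshed every second
>>     iostat -xm 1
>>
>> comparing the rMB/s and wMB/s columns for the data and xlog volumes while a
>> test runs.)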
>>
>> We have 12GB RAM on the server. When I run pgbench at a scale that puts the
>> pgbench database in the realm of 18GB-25GB, I barely break 110MB/sec writes
>> and 80MB/sec reads. I've run with different options, such as unlogged versus
>> logged tables, prepared statements or not, and transaction counts between
>> 1000 and 40000.
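>>
>> (Concretely, the runs look roughly like this; the scale factor is
>> illustrative and the database name "bench" is a placeholder:
>>
>>     # scale 1500 initializes a pgbench database of very roughly 22GB
>>     pgbench -i -s 1500 --unlogged-tables bench
>>     # 16 clients, 4 worker threads, prepared statements, 40000 transactions per client
>>     pgbench -c 16 -j 4 -M prepared -t 40000 bench
>>
>> with the unlogged and prepared options toggled between runs.)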
>>
>> I thought a parallel pg_dump/restore might also drive the disks, but that
>> doesn't push throughput either, topping out around 75MB/sec of reads.
>> Nightly vacuums also seem to peak below 110MB/sec of reads.
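>>
>> (That test was along these lines; the paths and database names are
>> placeholders:
>>
>>     # directory-format dump using 4 parallel worker processes
>>     pg_dump -Fd -j 4 -f /backups/mydb.dump mydb
>>     # parallel restore into an existing empty database
>>     pg_restore -j 4 -d mydb_restored /backups/mydb.dump
>>
>> so both sides should have been able to issue 4 streams of I/O at once.)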
>>
>> Here are the nondefault pg settings:
>>
>> max_connections = 1024
>> shared_buffers = 1024MB
>> wal_buffers = 16MB
>> checkpoint_completion_target = '.9'
>> archive_mode = on
>> random_page_cost = '1.5'
>> maintenance_work_mem = 512MB
>> work_mem = 64MB
>> max_wal_senders = 5
>> checkpoint_timeout = 10min
>> effective_io_concurrency = 4
>> effective_cache_size = 8GB
>> wal_keep_segments = 512
>> wal_level = hot_standby
>> synchronous_commit = off
>>
>> Any idea if or why Postgres might be bottlenecking disk throughput? Or is
>> there a testing method I'm missing that would get something closer to the
>> bonnie++ levels from within Postgres? I'm guessing I'm just not driving
>> enough activity to push it to the limit, but I'm not sure of a
>> straightforward way to verify that.
>>
>> Thanks,
>>
>> Mike
>>
>> Well, I'm no expert with bonnie++, but several of your PostgreSQL settings
>> look incorrect for the system config you provided. With 12GB of RAM,
>> shared_buffers should probably be closer to at least 3072MB; that is, if you
>> follow the general suggestion of setting it to 1/4 of your available RAM.
>> Also, your max_connections setting looks awfully high. Do you really need
>> 1024 connections? If so, I suggest you look into connection pooling
>> software, like pgpool-II.
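>>
>> (In postgresql.conf terms, something like the following; treat these values
>> as starting points rather than tuned recommendations:
>>
>>     shared_buffers = 3072MB    # roughly 1/4 of 12GB RAM
>>     max_connections = 200      # with a pooler such as pgpool-II in front
>>
>> and let the pooler hold the 1024 client connections instead.)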
>>
>> With an SSD, I also doubt your random page cost should be that high. With
>> that setting, you may be forcing a lot of queries to use sequential access
>> instead of random.
>>
>> Finally, consider increasing the two work_mem settings, though I can't
>> recommend specific values without knowing what kind of queries you're
>> running.
>> --
>> Jay
>>
>
>
