Re: Disk Benchmarking Question

From: Dave Stibrany <dstibrany(at)gmail(dot)com>
To: Mike Sofen <msofen(at)runbox(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Disk Benchmarking Question
Date: 2016-03-18 14:48:06
Message-ID: CAK17Jm=znGnmb2Kdmd2DOaOR46SAmn8G8jV-55Ayhb3da17rMQ@mail.gmail.com
Lists: pgsql-performance

Hey Mike,

Thanks for the response. I think where I'm confused is that I thought the
vendor-specified MBps was an estimate of sequential read/write speed.
Therefore, with 4 disks in RAID 10, you'd get roughly 4x the sequential read
speed and 2x the sequential write speed. Am I misunderstanding something?
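
For concreteness, here's the back-of-envelope model I've been using
(assuming reads are striped across all four spindles and every write has to
land on both disks of a mirror pair):

    sequential read  ~= 4 x single-disk throughput (all 4 spindles serve reads)
    sequential write ~= 2 x single-disk throughput (mirroring halves the
                        effective spindle count for writes)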

Also, when you mention that MBps is the capacity of the interface, what do
you mean exactly? I've been taking interface speed to mean the electronic
transfer speed of the link, not the speed of the actual physical medium, and
more in the 6-12 gigabit range.
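
The link math I've been using (assuming a 6 Gbit/s SAS lane with 8b/10b
encoding; correct me if this is off):

    6 Gbit/s x 8/10 (encoding overhead) = 4.8 Gbit/s of payload
    4.8 Gbit/s / 8 bits per byte       ~= 600 MB/s per lane

which would put the interface ceiling well above what 4 spinning disks can
sustain sequentially.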

Please let me know if I'm way off on any of this, I'm hoping to have my
mental model updated.

Thanks!

Dave

On Thu, Mar 17, 2016 at 5:11 PM, Mike Sofen <msofen(at)runbox(dot)com> wrote:

> Hi Dave,
>
> Database disk performance has to take IOPs into account, and IMO they
> matter more than MBps, since what counts is the disk subsystem's ability
> to write lots of little bits (usually) rather than giant globs, especially
> in direct-attached storage (like yours) versus a SAN. Most db disk
> benchmarks revolve around IOPs…and this is where SSDs utterly crush
> spinning disks.
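>
> A quick way to see the IOPs side for yourself is a small random-I/O test.
> This is only a sketch (assumes fio is installed and /tmp/fiotest is a
> scratch path on the array):
>
>   # 4 KB random reads, direct I/O so the page cache doesn't inflate numbers
>   fio --name=randread --filename=/tmp/fiotest --size=1G \
>       --rw=randread --bs=4k --direct=1 --ioengine=libaio \
>       --iodepth=32 --runtime=60 --time_based
>
> On 4 x 10k spindles in RAID 10 expect numbers in the hundreds; on a decent
> SSD, tens of thousands.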
>
> You can get maybe 200 IOPs out of each disk; with 4 in RAID 10 you get a
> whopping 400 IOPs. A single quality SSD (like the Samsung 850 Pro) will
> support a minimum of 40k IOPs on reads and 80k IOPs on writes. That's why
> SSDs are eliminating spinning disks when performance is critical and the
> budget allows.
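>
> (Roughly where that ~200 figure comes from, assuming ~4 ms average seek
> for a 10k SAS drive:
>
>   10,000 RPM => 6 ms per revolution => ~3 ms average rotational latency
>   ~3 ms rotation + ~4 ms seek ~= 7 ms per random I/O
>   1000 ms / 7 ms ~= 140-200 IOPs per spindle, depending on seek locality.)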
>
> Back to your question – the MBps figure is the capacity of the interface,
> so it makes sense that it's the same for both reads and writes. The PERC
> RAID controller will be saving your bacon on writes with its 2GB cache
> (assuming it's caching writes), so it becomes the equivalent of an SSD up
> to the capacity limit of the write cache. But with only 400 IOPs of write
> speed behind it, a busy server can easily saturate the cache, and then
> your system will drop to a crawl.
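>
> To make that concrete with illustrative numbers only (assume the cache
> accepts writes at wire speed and drains at 400 IOPs of 8 KB random writes):
>
>   drain to spindles: 400 IOPs x 8 KB ~= 3.2 MB/s
>   incoming writes:   say 100 MB/s sustained
>   net fill:          ~97 MB/s => a 2 GB cache fills in ~20 seconds
>
> After that, every write waits on the spindles.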
>
> If I didn’t answer the intent of your question, feel free to clarify for
> me.
>
> Mike
>
> *From:* pgsql-performance-owner(at)postgresql(dot)org
> [mailto:pgsql-performance-owner(at)postgresql(dot)org] *On Behalf Of* Dave Stibrany
> *Sent:* Thursday, March 17, 2016 1:45 PM
> *To:* pgsql-performance(at)postgresql(dot)org
> *Subject:* [PERFORM] Disk Benchmarking Question
>
> I'm pretty new to benchmarking hard disks and I'm looking for some advice
> on interpreting the results of some basic tests.
>
> The server is:
>
> - Dell PowerEdge R430
> - 1 x Intel Xeon E5-2620 2.4GHz
> - 32 GB RAM
> - 4 x 600GB 10k SAS Seagate ST600MM0088 in RAID 10
> - PERC H730P RAID controller with 2GB cache in write-back mode
>
> The OS is Ubuntu 14.04. I'm using LVM, with an ext4 volume for / and an
> xfs volume for PGDATA.
>
> I ran some dd and bonnie++ tests, and I'm a bit confused by the numbers.
> The bonnie++ run was 'bonnie++ -n0 -f' on the root volume.
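>
> In case it matters, the kind of dd test I have in mind looks like this
> (illustrative path and sizes; the file is sized to exceed the 32 GB of RAM
> so the page cache can't satisfy the reads):
>
>   # sequential write, ~64 GB, flushed to disk before dd reports a speed
>   dd if=/dev/zero of=/pgdata/ddtest bs=1M count=65536 conv=fdatasync
>
>   # drop caches, then sequential read of the same file
>   echo 3 | sudo tee /proc/sys/vm/drop_caches
>   dd if=/pgdata/ddtest of=/dev/null bs=1M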
>
> Here's a link to the bonnie++ test results:
>
> https://www.dropbox.com/s/pwe2g5ht9fpjl2j/bonnie.today.html?dl=0
>
> The vendor stats say sustained throughput of 215 down to 108 MBps per
> drive, so I'd guess at roughly 400-800 MBps sequential read and 200-400
> MBps sequential write for the array. What confuses me is that the measured
> sequential read and write speeds are almost identical. Does this look
> wrong?
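>
> (That expectation is just the RAID 10 multipliers applied to the vendor
> range, in case my arithmetic is the problem:
>
>   read:  4 spindles x 108-215 MBps ~= 430-860 MBps
>   write: 2 effective spindles x 108-215 MBps ~= 215-430 MBps.)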
>
> Thanks,
>
> Dave

