Re: Benchmarking a large server

From: Shaun Thomas <sthomas(at)peak6(dot)com>
To: Chris Hoover <revoohc(at)gmail(dot)com>
Cc: PGSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Benchmarking a large server
Date: 2011-05-09 21:01:26
Message-ID: 4DC85626.6010400@peak6.com
Lists: pgsql-performance

On 05/09/2011 03:32 PM, Chris Hoover wrote:

> So, does anyone have any suggestions/experiences in benchmarking storage
> when the storage is smaller than 2x memory?

We had a similar problem when benching our FusionIO setup. What I did
was write a script that cleared out the Linux system cache before every
iteration of our pgbench tests. You can do that easily with:

echo 3 > /proc/sys/vm/drop_caches

Executed as root.

Then we ran short (10, 20, 30, 40 clients, 10,000 transactions each)
pgbench tests, resetting the cache and the DB after every iteration. It
was all automated in a script, so it wasn't too much work.
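
For what it's worth, here's a rough sketch of the kind of script I mean
(not our actual script; the database name, scale factor, and exact
pgbench options below are just placeholders):

#!/bin/bash
# Rough sketch only -- not the actual script we used. Database name,
# scale factor, and client counts are placeholders; adjust to taste.

DB=pgbench_test
SCALE=1000   # pick a scale factor large enough that the data set exceeds RAM

for clients in 10 20 30 40; do
    # Rebuild the pgbench tables so every iteration starts from the same state.
    pgbench -i -s "$SCALE" "$DB" > /dev/null

    # Flush dirty pages, then drop the page cache, dentries, and inodes
    # so the next run reads from storage instead of memory (run as root).
    sync
    echo 3 > /proc/sys/vm/drop_caches

    # Short run: 10,000 transactions per client, as described above.
    pgbench -c "$clients" -t 10000 "$DB"
done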

We got (roughly) a 15x speed improvement over a 6x15k RPM RAID-10 setup
on the same server, with no other changes. This was definitely
corroborated after deployment, when our frequent periods of 100% disk IO
utilization vanished and were replaced by occasional 20-30% spikes. Even
that's an unfair comparison in favor of the RAID, since we also had to
add DRBD to the mix; you can't share a PCI card between two servers.

If you do have two 1.3TB Duo cards in a 4x640GB RAID-10, you should get
even better read times than we did.

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd. | Suite 800 | Chicago IL, 60604
312-676-8870
sthomas(at)peak6(dot)com

