Re: Benchmarking a large server

From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: Craig James <craig_james(at)emolecules(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Benchmarking a large server
Date: 2011-05-10 00:45:41
Message-ID: 4DC88AB5.1030003@2ndquadrant.com

Craig James wrote:
> Maybe this is a dumb question, but why do you care? If you have 1TB
> RAM and just a little more actual disk space, it seems like your
> database will always be cached in memory anyway. If you "eliminate
the cache effect," won't the benchmark actually give you the wrong
> real-life results?

If you'd just spent what two FusionIO drives cost, you'd want to make
damn sure they worked as expected too. Also, if you look carefully,
there is more disk space than this on the server, just not on the SSDs.
It's possible this setup could end up with most of RAM filled with data
that's stored on the regular drives. In that case the random
performance of the busy SSD would be critical. It would likely take a
very bad set of disk layout choices for that to happen, but I could see
heavy sequential scans of tables in a data warehouse pushing in that
direction.

Isolating the SSD performance without using the larger capacity of
the regular drives on the server is an excellent idea here; it's just
tricky to do.
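
As a rough sketch of one way to do that isolation: open a test file
sitting on the SSD with O_DIRECT so the reads bypass the OS page cache,
then time random 8 kB reads against it. The file path, block size, and
runtime below are placeholder values, not anything from this setup, and
a serious run would normally use a dedicated tool such as fio; this just
illustrates the idea of taking the cache out of the picture:

import mmap
import os
import random
import time

# Hypothetical test file pre-created on the SSD filesystem (not from the
# original post).  Making it several times larger than RAM helps ensure
# the reads can't all be served from memory anyway.
PATH = "/ssd/cache_probe.dat"
BLOCK = 8192          # PostgreSQL-sized 8 kB reads
SECONDS = 10          # how long to hammer the drive

# O_DIRECT (Linux-specific) bypasses the OS page cache entirely.
fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
f = os.fdopen(fd, "rb", buffering=0)
size = os.fstat(fd).st_size

# O_DIRECT requires an aligned buffer; an anonymous mmap is page-aligned.
buf = mmap.mmap(-1, BLOCK)

blocks = size // BLOCK
reads = 0
deadline = time.monotonic() + SECONDS
while time.monotonic() < deadline:
    f.seek(random.randrange(blocks) * BLOCK)
    f.readinto(buf)
    reads += 1

f.close()
print(f"{reads / SECONDS:.0f} random {BLOCK}-byte reads/sec, page cache bypassed")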

--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.us
"PostgreSQL 9.0 High Performance": http://www.2ndQuadrant.com/books
