From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Chris Hoover <revoohc(at)gmail(dot)com>
Cc: PGSQL Performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Benchmarking a large server
Date: 2011-05-09 20:50:51
Message-ID: BANLkTikSZ=7h7UBgsg3Zpri7m85gz9FC0g@mail.gmail.com
Lists: pgsql-performance
On Mon, May 9, 2011 at 3:32 PM, Chris Hoover <revoohc(at)gmail(dot)com> wrote:
> I've got a fun problem.
> My employer just purchased some new db servers that are very large. The
> specs on them are:
> 4 Intel X7550 CPU's (32 physical cores, HT turned off)
> 1 TB Ram
> 1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a raid 10)
> 3TB Sas Array (48 15K 146GB spindles)
my GOODNESS! :-D. I mean, just, wow.
> The issue we are running into is how do we benchmark this server,
> specifically, how do we get valid benchmarks for the Fusion IO card?
> Normally to eliminate the cache effect, you run iozone and other benchmark
> suites at 2x the ram. However, we can't do that due to 2TB > 1.3TB.
> So, does anyone have any suggestions/experiences in benchmarking storage
> when the storage is smaller than 2x memory?
hm, if it were me, I'd write a small C program that jumps around
directly on the raw device and does random writes, assuming it isn't
formatted. For sequential read, just flush the caches and dd the
device to /dev/null. Probably someone will suggest better tools, though.
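A minimal sketch of what I mean, written against a hypothetical device
path and size (both are assumptions you'd substitute for your own
hardware). It opens the device with O_DIRECT so the page cache doesn't
skew the numbers, falls back to a buffered open on filesystems that
reject O_DIRECT, and issues block-aligned random writes. Obviously
destructive: only point it at a device that holds nothing you care about.

```c
/* rndwrite.c - random-write probe sketch for a raw device.
 * Destructive: overwrites data at random offsets on the target path. */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK 8192              /* bytes per write; 4 KB-aligned for O_DIRECT */

/* Returns achieved writes/sec, or -1.0 on error. */
double run_random_writes(const char *path, long long size, int nwrites)
{
    long long nblocks = size / BLOCK;
    if (nblocks <= 0 || nwrites <= 0)
        return -1.0;

    /* O_DIRECT bypasses the page cache so we measure the device, not RAM;
     * fall back to a buffered open where the filesystem rejects it. */
    int fd = open(path, O_WRONLY | O_DIRECT);
    if (fd < 0)
        fd = open(path, O_WRONLY);
    if (fd < 0) {
        perror("open");
        return -1.0;
    }

    /* O_DIRECT requires an aligned buffer */
    void *buf;
    if (posix_memalign(&buf, 4096, BLOCK) != 0) {
        close(fd);
        return -1.0;
    }
    memset(buf, 0xAB, BLOCK);

    srandom((unsigned)time(NULL));
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < nwrites; i++) {
        /* random block-aligned offset; random() covers up to 2^31-1
         * blocks, plenty for a 1.3 TB device at 8 KB granularity */
        off_t off = (off_t)(random() % nblocks) * BLOCK;
        if (pwrite(fd, buf, BLOCK, off) != BLOCK) {
            perror("pwrite");
            free(buf);
            close(fd);
            return -1.0;
        }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    free(buf);
    close(fd);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return secs > 0 ? nwrites / secs : -1.0;
}
```

From a small main you'd call it with the device and its size, e.g.
`run_random_writes("/dev/fioa", 1300LL * 1024 * 1024 * 1024, 100000);`
(path and size are placeholders). For the sequential-read side,
something like `sync; echo 3 > /proc/sys/vm/drop_caches` followed by
`dd if=/dev/fioa of=/dev/null bs=1M` covers the flush-and-read step.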
merlin
Next Message: David Boreham | 2011-05-09 20:59:01 | Re: Benchmarking a large server
Previous Message: Merlin Moncure | 2011-05-09 20:41:07 | good performance benchmark