From: Craig James <craig_james(at)emolecules(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Benchmarking a large server
Date: 2011-05-10 00:32:19
Message-ID: 4DC88793.4000708@emolecules.com
Lists: pgsql-performance
2011/5/9 Chris Hoover <revoohc(at)gmail(dot)com>:
> I've got a fun problem.
> My employer just purchased some new db servers that are very large. The
> specs on them are:
> 4 Intel X7550 CPU's (32 physical cores, HT turned off)
> 1 TB Ram
> 1.3 TB Fusion IO (2 1.3 TB Fusion IO Duo cards in a raid 10)
> 3TB Sas Array (48 15K 146GB spindles)
> The issue we are running into is how do we benchmark this server,
> specifically, how do we get valid benchmarks for the Fusion IO card?
> Normally, to eliminate the cache effect, you run iozone and other benchmark
> suites at 2x the RAM. However, we can't do that here, since 2 TB > 1.3 TB.
> So, does anyone have any suggestions/experiences in benchmarking storage
> when the storage is smaller than 2x memory?
Maybe this is a dumb question, but why do you care? If you have 1 TB of RAM and only a little more actual disk space, your database will always be cached in memory anyway. If you "eliminate the cache effect," won't the benchmark give you the wrong real-life results?
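That said, if you do want device-level numbers that the 1 TB page cache can't inflate, the usual workaround is synchronous or direct I/O rather than a working set larger than 2x RAM. A minimal sketch (not from this thread; assumes GNU dd, and the file path is a placeholder):

```shell
#!/bin/sh
# Write 64 MiB and force it to the device before dd reports a rate.
# conv=fdatasync flushes the OS write-back cache at the end, so the
# reported throughput reflects the storage, not RAM, without needing
# a working set larger than memory.
dd if=/dev/zero of=./bench.tmp bs=1M count=64 conv=fdatasync

# For reads, O_DIRECT bypasses the page cache entirely (requires
# filesystem support; fails on e.g. tmpfs):
#   dd if=./bench.tmp of=/dev/null bs=1M iflag=direct

rm -f ./bench.tmp
```

Tools like fio can do the same via its direct=1 option and add queue-depth and mixed read/write workloads, which matters for a card like the Fusion IO that is only fully exercised under concurrency.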
Craig
Next message: David Boreham | 2011-05-10 00:38:29 | Re: Benchmarking a large server
Previous message: Cédric Villemain | 2011-05-09 23:52:01 | Re: Benchmarking a large server