From: David Boreham <david_list(at)boreham(dot)org>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Benchmarking a large server
Date: 2011-05-10 00:38:29
Message-ID: 4DC88905.8030200@boreham.org
Lists: pgsql-performance
On 5/9/2011 6:32 PM, Craig James wrote:
> Maybe this is a dumb question, but why do you care? If you have 1TB
> RAM and just a little more actual disk space, it seems like your
> database will always be cached in memory anyway. If you "eliminate
> the cache effect," won't the benchmark actually give you the wrong
> real-life results?
The time it takes to populate the cache from a cold start might be
important.

Also, if it were me, I'd want to check for weird performance behavior
at this memory scale. I've seen cases in the past where the VM
subsystem went bananas because the designers and testers of its
algorithms never considered the physical memory size we deployed.
How many times has the kernel been tested with this much memory, for
example? (Never??)
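The cold-start point can be illustrated with a small sketch (mine, not from the thread): time the same sequential read twice and compare. The second pass is typically served from the OS page cache, so the gap hints at what repopulating the cache costs. All names and the 16 MB file size here are arbitrary choices for illustration.

```python
import os
import tempfile
import time

# Illustrative sketch only: compare a first ("cold") and second ("warm")
# sequential read of a scratch file. Note the first read may itself be
# partly cached, since we just wrote the file.

def timed_read(path: str) -> float:
    """Read the whole file in 1 MB chunks and return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):
            pass
    return time.perf_counter() - start

# Create ~16 MB of scratch data (size is a hypothetical choice).
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(16 * 1024 * 1024))
    path = f.name

cold = timed_read(path)  # may already be partly cached from the write
warm = timed_read(path)  # almost certainly served from the page cache
print(f"first read: {cold:.4f}s, second read: {warm:.4f}s")
os.remove(path)
```

For a genuinely cold measurement on Linux you would also flush the page cache between runs (e.g. `sync; echo 3 > /proc/sys/vm/drop_caches` as root) before the first read; the sketch above omits that because it needs privileges.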