From: david(at)lang(dot)hm
To: David Boreham <david_list(at)boreham(dot)org>
Cc: "pgsql-performance(at)postgresql(dot)org >> PGSQL Performance" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Benchmarking a large server
Date: 2011-05-10 00:46:14
Message-ID: alpine.DEB.2.00.1105091745010.25291@asgard.lang.hm
Lists: pgsql-performance
On Mon, 9 May 2011, David Boreham wrote:
> On 5/9/2011 6:32 PM, Craig James wrote:
>> Maybe this is a dumb question, but why do you care? If you have 1TB RAM
>> and just a little more actual disk space, it seems like your database will
>> always be cached in memory anyway. If you "eliminate the cache effect,"
>> won't the benchmark actually give you the wrong real-life results?
>
> The time it takes to populate the cache from a cold start might be important.
You may also have other processes contending with the disk buffers for
memory (for that matter, postgres may itself use a significant amount
of that memory while producing its results).
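For what it's worth, the usual way to get a genuinely cold-cache run on
Linux is to flush the page cache between benchmark runs. A minimal sketch
(the drop_caches sysctl is standard Linux; the surrounding script is just
my illustration, and it needs root to actually drop anything):

```shell
#!/bin/sh
# Flush the Linux page cache so the next benchmark run starts cold.
sync                                    # write dirty pages to disk first
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches   # 3 = page cache + dentries + inodes
    status=dropped
else
    status=skipped                      # not root, or not Linux
fi
echo "cache state: $status"
```

Run this (as root) before each timed run if you want to measure the
cold-start case rather than the fully cached one.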
David Lang
> Also, if it were me, I'd be wanting to check for weird performance behavior
> at this memory scale.
> I've seen cases in the past where the VM subsystem went bananas because the
> designers
> and testers of its algorithms never considered the physical memory size we
> deployed.
>
> How many times has the kernel been tested with this much memory, for
> example? (never??)