From: James Cloos <cloos(at)jhcloos(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: CPUs for new databases
Date: 2010-10-26 23:45:12
Message-ID: m3ocagplgf.fsf@jhcloos.com
Lists: pgsql-performance
>>>>> "JB" == Josh Berkus <josh(at)agliodbs(dot)com> writes:
JB> In a general workload, fewer faster cores are better. We do not scale
JB> perfectly across cores. The only case where that's not true is
JB> maintaining lots of idle connections, and that's really better dealt
JB> with in software.
RAM speed is the most limiting factor I've run into for those cases
where the db fits in RAM. Less efficient lookups run just as fast with
the CPU in powersaving mode as in performance mode, which implies that
the cores are mostly waiting on RAM (cache or main memory).
I suspect cache size and RAM speed will be the most important factors
until the point where disk I/O speed and capacity take over.
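One way to check that on a given box (a sketch of my own, not something
measured in this thread) is a dependent-load microbenchmark: each read's
address comes from the previous read, so the core cannot overlap the
cache misses. If the reported nanoseconds per load barely move between
the powersaving and performance governors, the machine is latency-bound
on memory rather than compute-bound. File and symbol names here are
mine:

    /* chase.c -- time one dependent (pointer-chasing) load per step */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define N (1u << 24)        /* 16M slots, ~128MB: far beyond L3 */

    int main(void)
    {
        size_t *next = malloc(N * sizeof(*next));
        if (next == NULL)
            return 1;

        /* Sattolo's algorithm: a single-cycle random permutation,
         * so the walk below visits every slot before repeating. */
        for (size_t i = 0; i < N; i++)
            next[i] = i;
        srandom(42);
        for (size_t i = N - 1; i > 0; i--) {
            size_t j = (size_t) random() % i;
            size_t t = next[i];
            next[i] = next[j];
            next[j] = t;
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        size_t p = 0;
        for (size_t i = 0; i < N; i++)
            p = next[p];        /* each load depends on the last */
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                  + (t1.tv_nsec - t0.tv_nsec);
        printf("%.1f ns per dependent load (sink=%zu)\n", ns / N, p);

        free(next);
        return 0;
    }

Build it with gcc -O2 chase.c -o chase and run it under each governor;
printing p keeps the optimizer from deleting the loop.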
I'm sure some db applications run computationally expensive queries on
the server, but most queries seem light on computation and heavy on
gathering and comparing.
It can help to use recent versions of gcc with -march=native. And
recent versions of glibc offer improved string ops on recent hardware.
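For a source build of PostgreSQL, that amounts to something like the
following (a sketch, assuming gcc and the stock configure script; the
configure script honors CFLAGS from the environment):

    CFLAGS="-O2 -march=native" ./configure
    make && make install

Note that -march=native ties the binary to the CPU family it was built
on, so it isn't appropriate for packages shipped to mixed hardware.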
-JimC
--
James Cloos <cloos(at)jhcloos(dot)com> OpenPGP: 1024D/ED7DAEA6