From: Greg Smith <gsmith(at)gregsmith(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Feature Request --- was: PostgreSQL Performance Tuning
Date: 2007-05-04 04:33:29
Message-ID: Pine.GSO.4.64.0705032351240.14661@westnet.com
Lists: pgsql-general pgsql-performance
On Thu, 3 May 2007, Josh Berkus wrote:
> So any attempt to determine "how fast" a CPU is, even on a 1-5 scale,
> requires matching against a database of regexes which would have to be
> kept updated.
This comment, along with the subsequent commentary today that drifted far
astray into CPU measurement land, is a perfect example of why I advocate
attacking this problem from the perspective that assumes there is already a
database around we can query.
We don't have to care how fast the CPU is in any absolute terms; all we need
to know is how many of them there are (which, as you point out, is relatively
easy to find) and approximately how fast each one can run PostgreSQL.
Here's the first solution to this problem I came up with after one minute of
R&D:
-bash-3.00$ psql
postgres=# \timing
Timing is on.
postgres=# select count(*) from generate_series(1,100000,1);
 count
--------
 100000
(1 row)
Time: 106.535 ms
There you go, a completely cross-platform answer. You should run the
statement twice and only use the second result for better consistency. I
ran this on all the systems I was around today and got these results:
P4 2.4GHz 107ms
Xeon 3GHz 100ms
Opteron 275 65ms
Athlon X2 4600 61ms
For comparison's sake, these numbers are more useful for predicting actual
application performance than Linux's bogomips number, which completely
reverses the relative performance of the Intel vs. AMD chips in this set
from the reality of how well they run Postgres.
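The run-it-twice procedure could easily be scripted for collecting numbers
across a batch of machines. A rough sketch, assuming psql is on the PATH and
can reach a server with default connection settings; the parse_timing_ms
helper is my own, not anything psql provides:

```python
import re
import subprocess

def parse_timing_ms(psql_output: str) -> float:
    """Pull the value out of the 'Time: NNN.NNN ms' line that \\timing prints."""
    match = re.search(r"Time:\s*([\d.]+)\s*ms", psql_output)
    if match is None:
        raise ValueError("no \\timing line found in psql output")
    return float(match.group(1))

def benchmark_once() -> float:
    """Run the generate_series count once and return psql's reported time in ms."""
    script = "\\timing on\nselect count(*) from generate_series(1,100000,1);\n"
    result = subprocess.run(
        ["psql", "-X"], input=script, capture_output=True, text=True, check=True
    )
    return parse_timing_ms(result.stdout)

# Usage, on a machine with a running server:
#   benchmark_once()                      # first run: warm-up, discard it
#   print(f"{benchmark_once():.1f} ms")   # second run is the comparable number
```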
My philosophy in this area is that if you can measure something
performance-related with reasonable accuracy, don't try to estimate it
instead. All you have to do is follow some of the downright bizarre
dd/bonnie++ results people post here to realize that there can be a vast
difference between the performance you'd expect from a particular hardware
class and what you actually get.
While I'm ranting here, I should mention that I also sigh every time I see
people suggest we should ask the user how big their database is. The kind
of newbie user people keep talking about helping has *no idea whatsoever*
how big the data actually is after it gets into the database and all the
indexes are built. But if you tell someone "right now this database has 1
million rows and takes up 800MB; what multiple of its current size do you
expect it to grow to?", now that's something people can work with.
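Both numbers in that sample question are easy to pull from a running server;
a sketch using the standard size functions (available since 8.1), where the
table name "orders" is just a placeholder for one of your own tables:

```sql
-- Current database size, human-readable
select pg_size_pretty(pg_database_size(current_database()));

-- Planner's approximate row count for one table (hypothetical name "orders")
select reltuples::bigint from pg_class where relname = 'orders';
```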
--
* Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD