From: Greg Stark <gsstark(at)mit(dot)edu>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: 8.0beta5 results w/ dbt2
Date: 2004-11-30 07:00:29
Message-ID: 87is7n95yq.fsf@stark.xeocode.com
Lists: pgsql-hackers
Mark Wong <markw(at)osdl(dot)org> writes:
> I have some initial results using 8.0beta5 with our OLTP workload.
> http://www.osdl.org/projects/dbt2dev/results/dev4-010/199/
> throughput: 4076.97
Do people really only look at the "throughput" numbers? Looking at those
graphs it seems that while most of the OLTP transactions are fulfilled in
sub-second response times, a significant number still take as long as 30s
to fulfil.
Is this just a consequence of the type of queries being tested and the data
distribution? Or is Postgres handling queries that could run consistently fast
but for some reason generating large latencies sometimes?
I'm concerned because in my experience with web sites, once the database
responds slowly for even a small fraction of the requests, the web server
falls behind in handling http requests and a catastrophic failure builds.
It seems to me that reporting the maximum, or at least a 95% interval
("95% of queries executed between 50ms and 20s"), would be more useful than
an overall average.
Personally I would be happier with an average of 200ms but an interval of
100-300ms than an average of 100ms but an interval of 50ms-20s. Consistency
can be more important than sheer speed.
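As a rough illustration, here is a minimal sketch of the kind of summary I
mean. It assumes a plain log with one response time in seconds per line
(that input format is an assumption for illustration, not dbt2's actual
output layout) and reports the mean alongside the max and a central 95%
interval, so the long tail is visible instead of being averaged away:

```python
#!/usr/bin/env python3
# Summarize per-transaction response times: mean alone vs. a 95% interval.
# Assumes one response time in seconds per line on stdin -- a stand-in
# format for illustration, not dbt2's actual log output.
import sys
import statistics


def summarize(latencies):
    """Return mean, max, and the central 95% interval (2.5th-97.5th percentile)."""
    data = sorted(latencies)
    n = len(data)
    lo = data[int(0.025 * (n - 1))]
    hi = data[int(0.975 * (n - 1))]
    return statistics.mean(data), data[-1], (lo, hi)


if __name__ == "__main__":
    times = [float(line) for line in sys.stdin if line.strip()]
    mean, worst, (p_lo, p_hi) = summarize(times)
    # An average of 100ms can hide a long tail; the interval makes it visible.
    print(f"mean : {mean * 1000:.0f} ms")
    print(f"max  : {worst * 1000:.0f} ms")
    print(f"95% of transactions completed between "
          f"{p_lo * 1000:.0f} ms and {p_hi * 1000:.0f} ms")
```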
--
greg