From: Mark Wong <markw(at)osdl(dot)org>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: 8.0beta5 results w/ dbt2
Date: 2004-11-30 18:44:52
Message-ID: 20041130104452.A15968@osdl.org
Lists: pgsql-hackers
On Tue, Nov 30, 2004 at 02:00:29AM -0500, Greg Stark wrote:
> Mark Wong <markw(at)osdl(dot)org> writes:
>
> > I have some initial results using 8.0beta5 with our OLTP workload.
> > http://www.osdl.org/projects/dbt2dev/results/dev4-010/199/
> > throughput: 4076.97
>
> Do people really only look at the "throughput" numbers? Looking at those
> graphs it seems that while most of the OLTP transactions are fulfilled in
> subpar response times, there are still significant numbers that take as much
> as 30s to fulfil.
>
> Is this just a consequence of the type of queries being tested and the data
> distribution? Or is Postgres handling queries that could run consistently fast
> but for some reason generating large latencies sometimes?
>
> I'm concerned because in my experience with web sites, once the database
> responds slowly for even a small fraction of the requests, the web server
> falls behind in handling http requests and a catastrophic failure builds.
>
> It seems to me that reporting maximum, or at least the 95% confidence interval
> (95% of queries executed between 50ms-20s) would be more useful than an
> overall average.
>
> Personally I would be happier with an average of 200ms but an interval of
> 100-300ms than an average of 100ms but an interval of 50ms-20s. Consistency
> can be more important than sheer speed.
>
Looking at just the throughput number is oversimplifying things a bit. The
scale factor (the size of the database) limits the maximum achievable
throughput, because it constrains the think times (delays between
transaction requests) and the number of terminals simulated, which is
also dictated by the size of the database. So given the throughput at a
particular scale factor (600 in these tests), you can infer whether the
response times are reasonable. At the 600-warehouse scale factor we could
theoretically hit about 7200 new-order transactions per minute; the math
is roughly 12 * warehouses.
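
For illustration, a back-of-the-envelope sketch of that ceiling (the
12-per-warehouse constant and the helper here are just my own shorthand,
not anything from the kit):

    # Rough ceiling on new-order throughput at a given scale factor,
    # assuming roughly 12 new-order transactions per warehouse per minute.
    def max_notpm(warehouses, notpm_per_warehouse=12.0):
        return warehouses * notpm_per_warehouse

    scale_factor = 600        # warehouses used in these tests
    measured = 4076.97        # reported throughput for this run
    ceiling = max_notpm(scale_factor)   # ~7200 notpm

    print("ceiling: %.0f notpm, measured: %.2f (%.0f%% of theoretical max)"
          % (ceiling, measured, 100.0 * measured / ceiling))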
I do agree that reporting the maximum response time and a confidence
interval (I have been meaning to report a 90th-percentile number) would
be informative in addition to a mean. In the meantime I have included the
distribution charts instead...
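
Something along these lines is what I have in mind for the percentile
summary (the input is just a made-up list of per-transaction response
times in seconds, not the actual driver log format):

    import math

    # Mean, 90th percentile (nearest-rank), and max of a set of
    # per-transaction response times in seconds.
    def summarize(response_times):
        ordered = sorted(response_times)
        rank = int(math.ceil(0.90 * len(ordered)))   # nearest-rank 90th percentile
        return {
            "mean": sum(ordered) / len(ordered),
            "p90": ordered[rank - 1],
            "max": ordered[-1],
        }

    print(summarize([0.05, 0.08, 0.12, 0.30, 1.7, 23.4]))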
Mark