From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
Cc: "PostgreSQL-development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Load Distributed Checkpoints test results
Date: 2007-06-13 14:28:29
Message-ID: 873b0wylma.fsf@oxford.xeocode.com
Lists: pgsql-hackers
"Heikki Linnakangas" <heikki(at)enterprisedb(dot)com> writes:
> The response time graphs show that the patch reduces the max (new-order)
> response times during checkpoints from ~40-60 s to ~15-20 s.
I think that's the headline number here. The worst-case response time is
reduced from about 60s to about 17s. That's pretty impressive on its own. It
would be worth knowing whether that benefit goes away if we again push the
machine to the edge of its i/o bandwidth.
> The change in overall average response times is also very significant. 1.5s
> without patch, and ~0.3-0.4s with the patch for new-order transactions. That
> also means that we pass the TPC-C requirement that 90th percentile of response
> times must be < average.
Incidentally, this is backwards: the 90th percentile response time must be
greater than the average response time for that transaction.
This isn't actually a very stringent test, given that most of the data points
within the 90th percentile are substantially below the maximum. It's quite
possible to meet it even with maximum response times above 60s.
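To make that concrete, here's a quick Python sketch with made-up numbers (not
Heikki's measurements): a small number of 60s checkpoint stalls barely moves
the 90th percentile, so the check still passes.

    # Hypothetical numbers for illustration only: new-order response times
    # spread evenly between 0.1s and 0.5s, plus ten transactions stalled
    # behind a checkpoint at 60s.
    samples = [0.1 + 0.4 * i / 9990 for i in range(9990)] + [60.0] * 10
    samples.sort()

    avg = sum(samples) / len(samples)           # ~0.36s
    p90 = samples[int(0.9 * len(samples)) - 1]  # ~0.46s

    print(f"avg = {avg:.2f}s, p90 = {p90:.2f}s, max = {max(samples):.1f}s")
    print("p90 >= avg, so the TPC-C check passes:", p90 >= avg)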
However, TPC-E has even more stringent requirements:
During Steady State the throughput of the SUT must be sustainable for the
remainder of a Business Day started at the beginning of the Steady State.
Some aspects of the benchmark implementation can result in rather
insignificant but frequent variations in throughput when computed over
somewhat shorter periods of time. To meet the sustainable throughput
requirement, the cumulative effect of these variations over one Business
Day must not exceed 2% of the Reported Throughput.
Comment 1: This requirement is met when the throughput computed over any
period of one hour, sliding over the Steady State by increments of ten
minutes, varies from the Reported Throughput by no more than 2%.
Some aspects of the benchmark implementation can result in rather
significant but sporadic variations in throughput when computed over some
much shorter periods of time. To meet the sustainable throughput
requirement, the cumulative effect of these variations over one Business
Day must not exceed 20% of the Reported Throughput.
Comment 2: This requirement is met when the throughput level computed over
any period of ten minutes, sliding over the Steady State by increments of one
minute, varies from the Reported Throughput by no more than 20%.
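To make the sliding-window arithmetic concrete, here's a rough Python sketch;
the per-minute counts and the window_ok helper are made up for illustration,
not part of any benchmark kit.

    def window_ok(per_minute, window_min, step_min, reported_tpm, tolerance):
        """Slide a window of window_min minutes over the run in step_min
        increments and check that each window's throughput stays within
        tolerance of the reported throughput."""
        for start in range(0, len(per_minute) - window_min + 1, step_min):
            tpm = sum(per_minute[start:start + window_min]) / window_min
            if abs(tpm - reported_tpm) > tolerance * reported_tpm:
                return False
        return True

    # e.g. 8 hours of Steady State at a nominal 1000 transactions/minute
    per_minute = [1000] * 480
    reported = sum(per_minute) / len(per_minute)

    # Comment 1: one-hour windows, ten-minute steps, within 2%
    print(window_ok(per_minute, 60, 10, reported, 0.02))
    # Comment 2: ten-minute windows, one-minute steps, within 20%
    print(window_ok(per_minute, 10, 1, reported, 0.20))

A checkpoint that stalls throughput for even a couple of minutes shows up
immediately in the ten-minute windows, which is exactly the behaviour the
spec is trying to rule out.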
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com