From: Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: some longer, larger pgbench tests with various performance-related patches
Date: 2012-01-25 03:53:50
Message-ID: CABOikdOE9bJTNwURBRCfHimWN5LeOy=04XqnZMZTBrxYBMQM_A@mail.gmail.com
Lists: pgsql-hackers
On Wed, Jan 25, 2012 at 2:23 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Early yesterday morning, I was able to use Nate Boley's test machine
> to do a single 30-minute pgbench run at scale factor 300 using a variety
> of trees built with various patches, and with the -l option added to
> track latency on a per-transaction basis. All tests were done using
> 32 clients and permanent tables. The configuration was otherwise
> identical to that described here:
>
> http://archives.postgresql.org/message-id/CA+TgmoboYJurJEOB22Wp9RECMSEYGNyHDVFv5yisvERqFw=6dw@mail.gmail.com
>
> By doing this, I hoped to get a better understanding of (1) the
> effects of a scale factor too large to fit in shared_buffers, (2) what
> happens on a longer test run, and (3) how response time varies
> throughout the test. First, here are the raw tps numbers:
>
> background-clean-slru-v2: tps = 2027.282539 (including connections establishing)
> buffreelistlock-reduction-v1: tps = 2625.155348 (including connections
> establishing)
> buffreelistlock-reduction-v1-freelist-ok-v2: tps = 2468.638149
> (including connections establishing)
> freelist-ok-v2: tps = 2467.065010 (including connections establishing)
> group-commit-2012-01-21: tps = 2205.128609 (including connections establishing)
> master: tps = 2200.848350 (including connections establishing)
> removebufmgrfreelist-v1: tps = 2679.453056 (including connections establishing)
> xloginsert-scale-6: tps = 3675.312202 (including connections establishing)
>
> Obviously these numbers are fairly noisy, especially since this is
> just one run, so the increases and decreases might not be all that
> meaningful. Time permitting, I'll try to run some more tests to get
> my hands around that situation a little better,
>
This is nice. I am sure long-running tests will point out many more
issues. If we are doing these tests, it might be more effective to run
them even longer, so that each test covers at least 3-4 checkpoints,
vacuums and analyzes (and other such events that can swing the final
numbers either way). Otherwise, one patch may stand out simply because
it happens to avoid, say, one checkpoint.
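
For example (the numbers below are purely illustrative, not something I
have tested), with the default checkpoint_timeout of 5 minutes a
two-hour run would comfortably span several checkpoints:

    # initialize once at scale factor 300, as in the runs above
    pgbench -i -s 300 pgbench
    # two-hour run: 32 clients, per-transaction latency logging via -l
    # (database name "pgbench" is just a placeholder)
    pgbench -c 32 -j 32 -T 7200 -l pgbench

The point is simply that the run should be long relative to
checkpoint_timeout (and to checkpoint_segments-driven checkpoints), so
that no single checkpoint or autovacuum cycle can dominate the result.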
It would definitely help to log the checkpoint and
autovacuum/autoanalyze details and plot them on the graph, to see
whether the drops in performance have anything to do with those
activities. It might also be a good idea to collect PostgreSQL
statistics, such as relation sizes, at the end of each run.
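
Something along these lines is what I have in mind (the settings and
queries are only a sketch; the exact form is up to whoever runs the
tests):

    # postgresql.conf: make checkpoints and autovacuum/autoanalyze
    # visible in the server log
    log_checkpoints = on
    log_autovacuum_min_duration = 0

    -- at the end of each run, capture checkpoint counters and
    -- pgbench relation sizes
    SELECT * FROM pg_stat_bgwriter;
    SELECT relname, pg_size_pretty(pg_total_relation_size(oid)) AS total_size
      FROM pg_class
     WHERE relname LIKE 'pgbench%'
     ORDER BY relname;

The timestamps produced by log_checkpoints and
log_autovacuum_min_duration = 0 can then be overlaid directly on the
tps/latency graphs.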
Thanks,
Pavan
--
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com