From: Едигарьев, Иван Григорьевич <edigaryev(dot)ig(at)phystech(dot)edu>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, garsthe1st(at)gmail(dot)com, dxahtepb(at)gmail(dot)com, geymer_98(at)mail(dot)ru, dafi913(at)yandex(dot)ru, Benjamin Manes <ben(dot)manes(at)gmail(dot)com>
Subject: Re: [Patch][WiP] Tweaked LRU for shared buffers
Date: 2019-02-17 13:14:30
Message-ID: CAGPXO-tO849XP=12i8E6qEyxCt1x1+kx0a2uOSTAJA5mzfu2jA@mail.gmail.com
Lists: pgsql-hackers
Hi there. I was responsible for the benchmarks, so I would be glad to
clarify that part for you.
On Sat, 16 Feb 2019 at 02:30, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> Interesting. Where do these numbers (5/8 and 1/8) come from?
The first number came from MySQL's implementation of the LRU algorithm
<https://dev.mysql.com/doc/refman/8.0/en/innodb-buffer-pool.html>,
and the second from simple tuning: we tried varying the 1/8 value a little,
but it did not change the metrics significantly.
> That TPS chart looks a bit ... wild. How come the master jumps so much
> up and down? That's a bit suspicious, IMHO.
Yes, it is. It would be great if someone could try to reproduce those results.
> How do I reproduce this benchmark? I'm aware of pg_ycsb, but maybe
> you've used some other tools?
Yes, we used pg_ycsb and pgbench, but nothing more than that; maybe that
is just too simple. I attach an example of the shell script that was used
to generate the database and to measure each point on the chart. The build
was configured without any additional debug flags. The database was created
by initdb with --data-checksums enabled and populated by the pgbench
initialization step with --scale=11000.
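Roughly, the steps look like this (only a sketch, not the attached script
itself; the data directory, database name, and run duration below are
placeholders):

    # setup: checksummed cluster, scale-11000 pgbench database
    initdb --data-checksums -D /path/to/data
    pg_ctl -D /path/to/data -l logfile start
    createdb bench
    pgbench -i --scale=11000 bench

    # one measurement point: 16 clients, 16 jobs; duration is a placeholder
    pgbench -c 16 -j 16 -T 300 -P 10 bench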
> Also, have you tried some other benchmarks (like, regular TPC-B as
> implemented by pgbench, or read-only pgbench)? We need such benchmarks
> with a range of access patterns to check for regressions.
Yes, we tried all of the built-in pgbench benchmarks and YCSB-A, B and C
from pg_ycsb, with both uniform and zipfian distributions. I also attach
some other charts that we made; they are not as statistically significant
as they could be, since we ran them for less time, but I hope they will
still help.
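For reference, a skewed access pattern can also be approximated with plain
pgbench and a custom script along these lines (again only a sketch, not
what pg_ycsb itself does; the skew value 1.5, the run length, and the
script name are arbitrary, and 1,100,000,000 is the account count at scale
11000):

    # sketch: zipfian-distributed read-only lookups via pgbench
    cat > zipf_select.sql <<'EOF'
    \set aid random_zipfian(1, 1100000000, 1.5)
    SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
    EOF

    pgbench -n -c 16 -j 16 -T 300 -f zipf_select.sql bench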
> BTW what do you mean by "sampling"?
I meant that we measured TPS and hit rate on several virtual machines, for
both our build and master, in order to cancel out the influence of
differences between the machines.
> > We used this config: [2]
> >
>
> That's only half the information - it doesn't say how many clients were
> running the benchmark etc.
Yes, sorry for omitting that: we used virtual machines with the
configuration mentioned in the initial letter, running pgbench with 16
jobs and 16 clients.
[0] Scripts https://yadi.sk/d/PHICP0N6YrN5Cw
[1] Measurements for other workloads https://yadi.sk/d/6G0e09Drf0ygag
I am looking forward to any other questions about the measurements or the
code; please let me know if you have them.
Best regards.
--
Ivan Edigaryev