From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alik Khilazhev <a(dot)khilazhev(at)postgrespro(dot)ru>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [WIP] Zipfian distribution in pgbench
Date: 2017-07-07 12:17:47
Message-ID: CA+Tgmob6mrxyAfjiiOeb5+UjR7_3phbyYTxWZnwr28jVqDM_Dw@mail.gmail.com
Lists: pgsql-hackers
On Fri, Jul 7, 2017 at 3:45 AM, Alik Khilazhev
<a(dot)khilazhev(at)postgrespro(dot)ru> wrote:
> PostgreSQL shows very bad results in YCSB Workload A (50% SELECT and 50% UPDATE of a random row by PK) when benchmarking with a large number of clients using a Zipfian distribution. MySQL also shows a decline, but it is not as significant as it is in PostgreSQL. MongoDB shows no decline at all.
How is that possible? In a Zipfian distribution, no matter how big
the table is, almost all of the updates will be concentrated on a
handful of rows - and updates to any given row are necessarily
serialized, or so I would think. Maybe MongoDB can be fast there
since there are no transactions, so it can just lock the row, slam in
the new value, and unlock the row, all (I suppose) without writing WAL
or doing anything hard. But MySQL is going to have to hold the row
lock until transaction commit just like we do, or so I would think.
Is it just that their row locking is way faster than ours?
I'm more curious about why we're performing badly than I am about a
general-purpose random_zipfian function. :-)
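[Editor's note: to make the concentration claim above concrete, here is a rough, standalone sketch in plain Python; it is not pgbench code. The table size of 100,000 rows and the skew exponent s = 1.07 are assumptions chosen in the spirit of YCSB-style workloads, and the sampler is a simple inverse-CDF lookup, not the algorithm proposed in the patch.]

```python
import bisect
import random

# Hypothetical parameters: 100k-row table, YCSB-like skew exponent.
N_ROWS = 100_000
S = 1.07
N_SAMPLES = 10_000

# Precompute cumulative Zipfian weights over ranks 1..N_ROWS
# so each sample is a single binary search.
cum = []
total = 0.0
for k in range(1, N_ROWS + 1):
    total += 1.0 / (k ** S)
    cum.append(total)

rng = random.Random(42)

def zipf_sample():
    # Inverse-CDF sampling: pick a point in [0, total) and find
    # the first rank whose cumulative weight covers it.
    return bisect.bisect_left(cum, rng.random() * cum[-1]) + 1

hits = {}
for _ in range(N_SAMPLES):
    k = zipf_sample()
    hits[k] = hits.get(k, 0) + 1

top10 = sum(sorted(hits.values(), reverse=True)[:10])
share = top10 / N_SAMPLES
print(f"share of accesses hitting the 10 hottest rows: {share:.0%}")
```

With this skew, roughly a third of all accesses land on just the ten hottest rows out of 100,000, which is why updates in such a workload are effectively serialized on a handful of tuples regardless of table size.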
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company