From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Rahila Syed <rahilasyed90(at)gmail(dot)com>, Andres Freund <andres(at)2ndquadrant(dot)com>, Abhijit Menon-Sen <ams(at)2ndquadrant(dot)com>, Rahila Syed <rahilasyed(dot)90(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [REVIEW] Re: Compression of full-page-writes
Date: 2014-09-12 19:38:01
Message-ID: 54134B99.6030806@vmware.com
Lists: pgsql-hackers
On 09/02/2014 09:52 AM, Fujii Masao wrote:
> [RESULT]
> Throughput in the benchmark.
>
>          Multiple   Single
> off        2162.6   2164.5
> on          891.8    895.6
> pglz       1037.2   1042.3
> lz4        1084.7   1091.8
> snappy     1058.4   1073.3
Most of the CPU overhead of writing full pages is because of CRC
calculation. Compression helps because then you have less data to CRC.
It's worth noting that there are faster CRC implementations out there
than what we use. The Slicing-by-4 algorithm was discussed years ago,
but was not deemed worth it back then IIRC because we typically
calculate CRC over very small chunks of data, and the benefit of
Slicing-by-4 and many other algorithms only shows up when you work on
larger chunks. But a full-page image is probably large enough to benefit.
What I'm trying to say is that this should be compared with the idea of
just switching the CRC implementation. That would make the 'on' case
faster, and the benefit of compression smaller. I wouldn't be
surprised if it made the 'on' case faster than compressed cases.
I don't mean that we should abandon this patch - compression makes the
WAL smaller which has all kinds of other benefits, even if it makes the
raw TPS throughput of the system worse. But I'm just saying that these
TPS comparisons should be taken with a grain of salt. We probably should
consider switching to a faster CRC algorithm again, regardless of what
we do with compression.
- Heikki