From: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>
To: Michael Paquier <michael(at)paquier(dot)xyz>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Vladimir Leskov <vladimirlesk(at)yandex-team(dot)ru>
Subject: Re: pglz performance
Date: 2019-06-27 18:33:16
Message-ID: 169163A8-C96F-4DBE-A062-7D1CECBE9E5D@yandex-team.ru
Lists: pgsql-hackers
> On 13 May 2019, at 12:14, Michael Paquier <michael(at)paquier(dot)xyz> wrote:
>
> Decompression can matter a lot for mostly-read workloads and
> compression can become a bottleneck for heavy-insert loads, so
> improving compression or decompression should be two separate
> problems, not two problems linked. Any improvement in one or the
> other, or even both, is nice to have.
Here's a patch by Vladimir for the compression side.
Key differences (as far as I can see; maybe Vladimir will post a more complete list of optimizations):
1. Use functions instead of function-like macros: not surprisingly, they are easier to optimize and impose fewer constraints on the compiler.
2. A more compact hash table: use indexes instead of pointers.
3. A more robust segment comparison: like memcmp(), but returning the index of the first differing byte.
On a weighted mix of different data (the same mix used for the compression benchmarks), the overall speedup is x1.43 on my machine.
The current implementation is integrated into the test_pglz suite for benchmarking purposes [0].
Best regards, Andrey Borodin.
Attachment: 0001-Reorganize-pglz-compression-code.patch (application/octet-stream, 24.2 KB)