Hi,

I've run a couple of pgbench benchmarks using this patch with the Odyssey
connection pooler, with client-to-pooler ZSTD compression turned on.

pgbench --builtin tpcb-like -t 75 --jobs=32 --client=1000

CPU utilization chart for the configuration above:
https://storage.yandexcloud.net/usernamedt/odyssey-compression.png

The average CPU overhead was about 10%.

pgbench -i -s 1500

CPU utilization chart for the configuration above:
https://storage.yandexcloud.net/usernamedt/odyssey-compression-i-s.png

As you can see, there was no noticeable difference in CPU utilization with
ZSTD compression enabled or disabled.

Regarding replication, I've made a couple of fixes for this patch; you can
find them in this pull request:
https://github.com/postgrespro/libpq_compression/pull/3

With these fixes applied, I've run some tests using this patch with streaming
physical replication on some large clusters. Here is the difference in network
usage on a replica with ZSTD replication compression enabled compared to a
replica without replication compression:

- on pgbench -i -s 1500 there was ~23x less network usage

- on pgbench -T 300 --jobs=32 --client=640 there was ~4.5x less network usage

- on pg_restore of a ~300 GB database there was ~5x less network usage

To sum up, I think that the current version of the patch (with per-connection
compression) is OK from the protocol point of view, except for the compression
initialization part. As discussed, we can either do the initialization before
the startup packet or move compression into a _pq_ parameter to avoid issues
on older backends.

Regarding compression that is switchable on the fly: although it adds
flexibility, it seems that it would significantly increase the implementation
complexity of both the frontend and the backend. To support this approach in
the future, maybe we should add something like a compression mode to the
protocol, name the current approach "permanent", and reserve the "switchable"
compression type for a future implementation?
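To illustrate the idea, here is a rough sketch (the _pq_.compression parameter
name, value format, and enum below are made up for illustration, not what the
patch currently implements):

/*
 * Rough illustration only: a compression "mode" carried in a hypothetical
 * _pq_.compression startup parameter.  Backends new enough to do protocol
 * negotiation (v11+) report unknown _pq_.* options in
 * NegotiateProtocolVersion instead of erroring out, so the client could
 * simply fall back to an uncompressed connection on older servers.
 */
#include <stdio.h>

typedef enum
{
	PG_COMPRESSION_NONE = 0,	/* no compression requested */
	PG_COMPRESSION_PERMANENT,	/* current patch: whole-connection compression */
	PG_COMPRESSION_SWITCHABLE	/* reserved for a future on-the-fly variant */
} CompressionMode;

/* Value the frontend would put into the hypothetical startup parameter. */
static const char *
compression_startup_value(CompressionMode mode)
{
	switch (mode)
	{
		case PG_COMPRESSION_PERMANENT:
			return "zstd;mode=permanent";
		case PG_COMPRESSION_SWITCHABLE:
			return "zstd;mode=switchable";
		default:
			return NULL;		/* omit the parameter entirely */
	}
}

int
main(void)
{
	/* Key/value pair that would go into the startup packet. */
	printf("_pq_.compression = %s\n",
		   compression_startup_value(PG_COMPRESSION_PERMANENT));
	return 0;
}

With the mode already part of the negotiation, a "switchable" variant could be
added later without another protocol change.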
Thanks,

Daniil Zakhlystov

06.11.2020, 11:58, "Andrey Borodin" <x4mmm(at)yandex-team(dot)ru>:
>> On 6 Nov 2020, at 00:22, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com> wrote:
>>
>> On 2020-11-02 20:50, Andres Freund wrote:
>>> On 2020-10-31 22:25:36 +0500, Andrey Borodin wrote:
>>>> But the price of compression is 1 CPU for 500 MB/s (zstd). With a
>>>> 20 Gbps network adapter, the cost of recompressing all traffic is at
>>>> most ~4 cores.
>>> It's not quite that simple, because presumably each connection is going
>>> to be handled by one core at a time in the pooler. So it's easy to slow
>>> down peak throughput if you also have to deal with TLS etc.
>>
>> Also, current deployments of connection poolers use rather small machine
>> sizes. Telling users you need 4 more cores per instance now to decompress
>> and recompress all the traffic doesn't seem very attractive.
>> Also, it's not unheard of to have more than one layer of connection
>> pooling. With that, this whole design sounds a bit like a heat-generation
>> system. ;-)
>
> Users should ensure good bandwidth between the pooler and the DB. At the
> very least, they must be within one availability zone. This makes
> compression between the pooler and the DB unnecessary.
> Cross-datacenter traffic is many times more expensive.
>
> I agree that switching between compression levels (including turning it
> off) seems like a nice feature. But:
> 1. The scope of its usefulness is an order of magnitude smaller than
> compression of the whole connection.
> 2. The protocol for this feature is significantly more complicated.
> 3. Restarted compression is much less efficient and effective.
>
> Can we design the protocol so that this feature can be implemented in the
> future, while currently focusing on getting things compressed? Are there
> any drawbacks to this approach?
>
> Best regards, Andrey Borodin.