From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, Dimitri Fontaine <dimitri(at)2ndquadrant(dot)fr>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Compression of full-page-writes
Date: 2013-10-11 17:06:55
Message-ID: 20131011170655.GB4056218@alap2.anarazel.de
Lists: pgsql-hackers
On 2013-10-11 09:22:50 +0530, Amit Kapila wrote:
> I think it will be difficult to prove, for any compression
> algorithm, that it compresses well in most scenarios. In many cases
> the WAL will not shrink, and tps can even drop if the data is
> incompressible, because the algorithm still has to attempt the
> compression and burns CPU doing so, which in turn reduces tps.
Then those concepts maybe aren't such a good idea after all. Storing
lots of compressible data in an uncompressed fashion isn't all that
common a use case. I most certainly don't want postgres to optimize for
blank-padded data, especially if it can hurt other scenarios. There's
just not enough benefit.
That said, I actually have relatively high hopes for compressing full
page writes. There is often enough repetitiveness between rows on the
same page that it should be useful outside of such contrived
scenarios. But maybe pglz is just not a good fit for this; it really
isn't a very good algorithm in this day and age.
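To make the trade-off concrete, here is a minimal sketch in Python.
pglz isn't exposed outside the server, so zlib stands in as a generic
LZ-family compressor, and the page contents are synthetic assumptions
rather than real heap pages:

import os
import time
import zlib

BLCKSZ = 8192  # PostgreSQL's default page size

# A page of repetitive rows: identical structure with small per-row
# variation, the pattern a full-page write would typically contain.
rows = b"".join(b"(%06d, 'status=active', 0.00)" % i for i in range(300))
repetitive_page = rows[:BLCKSZ]

# An incompressible page, standing in for random or pre-compressed data.
random_page = os.urandom(BLCKSZ)

for name, page in (("repetitive", repetitive_page), ("random", random_page)):
    start = time.perf_counter()
    compressed = zlib.compress(page)
    elapsed_us = (time.perf_counter() - start) * 1e6
    print("%-10s %5d -> %5d bytes (%3.0f%%) in %5.0f us"
          % (name, len(page), len(compressed),
             100.0 * len(compressed) / len(page), elapsed_us))

The repetitive page shrinks to a small fraction of BLCKSZ, while the
random page stays at (slightly above) its original size and still pays
the CPU cost of the attempt, which is exactly the downside quoted above.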
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services