From: Andres Freund <andres(at)anarazel(dot)de>
To: Craig Ringer <craig(at)2ndquadrant(dot)com>
Cc: Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: custom compression methods
Date: 2015-12-14 18:50:30
Message-ID: 20151214185030.GA32034@awork2.anarazel.de
Lists: pgsql-hackers
On 2015-12-14 14:50:57 +0800, Craig Ringer wrote:
> http://www.postgresql.org/message-id/flat/20130615102028(dot)GK19500(at)alap2(dot)anarazel(dot)de#20130615102028(dot)GK19500@alap2.anarazel.de
> The issue with per-Datum is that TOAST claims two bits of the varlena header,
> which already limits us to 1 GiB varlena values, something people are
> starting to find problematic. There's no wiggle room to steal more
> bits. If you want pluggable compression, you need a way to store, with the
> datum, knowledge of how that datum is compressed, or a fast, efficient
> way to check.
>
> pg_upgrade means you can't just redefine the current toast bits so the
> compressed bit means "data is compressed, check first byte of varlena data
> for algorithm", because existing data won't have that byte; its first byte
> will be the start of the compressed data stream.
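To make the quoted concern concrete, here is a minimal, hypothetical C sketch of the "tag the datum with its algorithm" idea being debated. It is not the referenced patch and does not use PostgreSQL's actual varattrib structs or macros; the struct, enum values, and function names are invented for illustration only.

```c
/*
 * Hypothetical layout sketch -- NOT PostgreSQL's actual varlena/varattrib
 * definitions.  The idea under discussion: when the toast "compressed" bit
 * is set, the first byte of the compressed payload names the algorithm.
 * Data written before such a change carries no such byte; its first byte is
 * already part of the pglz stream, which is the pg_upgrade objection above.
 */
#include <stdint.h>

typedef enum
{
	COMPRESSION_PGLZ = 0x01,	/* hypothetical identifiers */
	COMPRESSION_LZ4 = 0x02
} CompressionMethod;

typedef struct
{
	uint32_t	rawsize;		/* uncompressed length */
	uint8_t		method;			/* would-be algorithm tag (new datums only) */
	uint8_t		payload[];		/* compressed bytes follow */
} CompressedDatumSketch;

/*
 * Returns the method tag for a datum written under the new scheme.  For
 * pre-upgrade data there is no tag to read, so callers would have to assume
 * pglz -- but they cannot tell the two cases apart from the bytes alone,
 * which is exactly the problem raised in the quoted paragraph.
 */
static inline CompressionMethod
sketch_get_method(const CompressedDatumSketch *d)
{
	return (CompressionMethod) d->method;
}
```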
I don't think there's an actual problem here. My old patch that you
referenced solves this.
Andres