From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Chapman Flack <chap(at)anastigmatix(dot)net>
Cc: Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: custom compression methods
Date: 2015-12-14 07:36:49
Message-ID: CAMsr+YGpnCjtUqmVyskRGP86Me67rXjgWV+i2rkYypnYdvd9vg@mail.gmail.com
Lists: pgsql-hackers
On 14 December 2015 at 15:27, Chapman Flack <chap(at)anastigmatix(dot)net> wrote:
> On 12/14/15 01:50, Craig Ringer wrote:
>
> > pg_upgrade means you can't just redefine the current toast bits so the
> > compressed bit means "data is compressed, check first byte of varlena data
> > for algorithm" because existing data won't have that, the first byte will
> > be the start of the compressed data stream.
>
> Is there any small sequence of initial bytes you wouldn't ever see in PGLZ
> output? Either something invalid, or something obviously nonoptimal
> like run(n,'A')||run(n,'A') where PGLZ would have just output run(2n,'A')?
>

I don't think we need to worry about that: doing it per-column makes the
issue go away entirely. Per-Datum compression would make it easier to switch
methods (no table rewrite required), but at the cost of extra storage in
every varlena, which probably isn't worth it anyway.
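
To make the trade-off concrete, here's a rough sketch (purely illustrative,
not the actual varattrib layout, and the names are made up) of what a
per-Datum scheme would have to store compared with what's on disk today:

    #include <stdint.h>

    /*
     * Hypothetical per-Datum layout: every compressed value spends an
     * extra byte naming the algorithm that produced it.
     */
    typedef struct PerDatumCompressed
    {
        uint32_t va_header;   /* length word + "compressed" bit, as now */
        uint32_t rawsize;     /* uncompressed size, as now */
        uint8_t  method_id;   /* NEW: which compression method was used */
        uint8_t  data[];      /* compressed bytes follow */
    } PerDatumCompressed;

    /*
     * With per-column compression the method lives in the catalog instead
     * (say, a compression attribute on the column), so the on-disk Datum
     * stays exactly as it is today and pg_upgrade'd data needs no
     * reinterpretation; the cost is a rewrite if you later change the
     * column's method.
     */

That per-value byte is the extra storage I'm referring to above.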
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services