From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Nikhil Kumar Veldanda <veldanda(dot)nikhilkumar17(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ZStandard (with dictionaries) compression support for TOAST compression
Date: 2025-03-06 19:33:30
Message-ID: 720003.1741289610@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Thu, Mar 6, 2025 at 12:43 AM Nikhil Kumar Veldanda
> <veldanda(dot)nikhilkumar17(at)gmail(dot)com> wrote:
>> Notably, this is the first compression algorithm for Postgres that can make use of a dictionary to provide higher levels of compression, but dictionaries have to be generated and maintained,
> I think that solving the problems around using a dictionary is going
> to be really hard. Can we see some evidence that the results will be
> worth it?
BTW, this is hardly the first such attempt. See [1] for a prior
attempt at something fairly similar, which ended up going nowhere.
It'd be wise to understand why that failed before pressing forward.
Note that the thread title for [1] is pretty misleading, as the
original discussion about JSONB-specific compression soon migrated
to discussion of compressing TOAST data using dictionaries. At
least from a ten-thousand-foot viewpoint, that seems like exactly
what you're proposing here. I see that you dismissed [1] as
irrelevant upthread, but I think you'd better look closer.
regards, tom lane