From: Nathan Bossart <nathandbossart(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: davinder singh <davindersingh2692(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Optimize external TOAST storage
Date: 2022-03-16 18:46:37
Message-ID: 20220316184637.GC1137410@nathanxps13
Lists: pgsql-hackers
On Wed, Mar 16, 2022 at 11:36:56AM -0700, Nathan Bossart wrote:
> Thinking further, is simply reducing the number of TOAST chunks the right
> thing to look at? If I want to add a TOAST attribute that requires 100,000
> chunks, and you told me that I could save 10% in the read path for an extra
> 250 chunks of disk space, I would probably choose read performance. If I
> wanted to add 100,000 attributes that were each 3 chunks, and you told me
> that I could save 10% in the read path for an extra 75,000 chunks of disk
> space, I might choose the extra disk space. These are admittedly extreme
> (and maybe even impossible) examples, but my point is that the amount of
> disk space you are willing to give up may be related to the size of the
> attribute. And maybe one way to extract additional read performance with
> this optimization is to use a variable threshold so that we are more likely
> to use it for large attributes.
I might be overthinking this. Maybe it is enough to skip compressing the
attribute whenever compression saves no more than some percentage of the
uncompressed attribute size. A conservative default setting might be
something like 5% or 10%.
--
Nathan Bossart
Amazon Web Services: https://aws.amazon.com