From: John Naylor <john(dot)naylor(at)2ndquadrant(dot)com>
To: Binguo Bao <djydewang(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, Atri Sharma <atri(dot)jiit(at)gmail(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Владимир Лесков <vladimirlesk(at)yandex-team(dot)ru>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [proposal] de-TOAST'ing using a iterator
Date: 2019-07-18 03:39:48
Message-ID: CACPNZCv7FR4ioHNjKbRP4UeR+JEFkATXo8Tf6k-6jU+-WbJkTg@mail.gmail.com
Lists: pgsql-hackers
On Tue, Jul 16, 2019 at 9:14 PM Binguo Bao <djydewang(at)gmail(dot)com> wrote:
> In the compressed beginning case, the test result is different from yours, since the patch is ~1.75x faster
> rather than showing no improvement. The interesting thing is that the patch is 4% faster than master in the uncompressed end case.
> I can't figure out the reason now.
Probably some differences in our test environments. I wouldn't worry
about it too much, since we can show improvement in more realistic
tests.
>> I'm not an expert on TOAST, but maybe one way to solve both problems
>> is to work at the level of whole TOAST chunks. In that case, the
>> current patch would look like this:
>> 1. The caller requests more of the attribute value from the de-TOAST iterator.
>> 2. The iterator gets the next chunk and either copies or decompresses
>> the whole chunk into the buffer. (If inline, just decompress the whole
>> thing)
>
>
> Thanks for your suggestion. It is indeed possible to implement PG_DETOAST_DATUM_SLICE using the de-TOAST iterator.
> IMO the iterator is more suitable for situations where the caller doesn't know the slice size. If the caller knows the slice size,
> it is reasonable to fetch enough chunks at once and then decompress them at once.
That sounds reasonable, since it would mean less overhead.
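
For what it's worth, the arithmetic for the known-size case is simple
enough. Here is a toy illustration (TOAST_MAX_CHUNK_SIZE is the real
macro, but the value below and the function around it are invented for
the example):

#include <stdio.h>
#include <stdint.h>

#define TOAST_MAX_CHUNK_SIZE 1996   /* typical value with 8kB pages */

/* Chunks needed to cover the slice [offset, offset + length). */
static int32_t
slice_chunk_count(int32_t offset, int32_t length)
{
    int32_t first = offset / TOAST_MAX_CHUNK_SIZE;
    int32_t last = (offset + length - 1) / TOAST_MAX_CHUNK_SIZE;

    return last - first + 1;
}

int
main(void)
{
    /* A 10000-byte slice starting at offset 5000 touches chunks 2..7. */
    printf("%d chunks\n", slice_chunk_count(5000, 10000));
    return 0;
}

With the count known up front, the chunks can be fetched in one pass
and handed to the decompressor in a single call, as you say.
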
In the case where we don't know the slice size, how about the other
aspect of my question above: might it be simpler, and have less
overhead, to decompress entire chunks at a time? If so, I think it
would be enlightening to compare performance.
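
To make the whole-chunk idea concrete, here is a self-contained toy
model of what I'm picturing (none of these names come from the patch,
and memcpy stands in for decompression):

#include <stdio.h>
#include <string.h>

#define CHUNK_SIZE 8            /* stand-in for TOAST_MAX_CHUNK_SIZE */

typedef struct DetoastIterator
{
    const char *source;         /* the raw TOAST data */
    size_t      source_len;
    size_t      next_offset;    /* first byte not yet produced */
    char       *buf;            /* grows by one whole chunk per call */
} DetoastIterator;

/* Produce one more whole chunk; return bytes added, 0 at the end. */
static size_t
detoast_iterate_chunk(DetoastIterator *iter)
{
    size_t      remaining = iter->source_len - iter->next_offset;
    size_t      n = remaining < CHUNK_SIZE ? remaining : CHUNK_SIZE;

    if (n == 0)
        return 0;

    /*
     * A real iterator would decompress the chunk here; memcpy keeps
     * the example self-contained.
     */
    memcpy(iter->buf + iter->next_offset,
           iter->source + iter->next_offset, n);
    iter->next_offset += n;
    return n;
}

int
main(void)
{
    const char *value = "0123456789abcdefghij";
    char        out[32] = {0};
    DetoastIterator iter = {value, strlen(value), 0, out};

    /* The caller keeps asking until it has as much as it needs. */
    while (detoast_iterate_chunk(&iter) > 0)
        printf("have %zu bytes: %s\n", iter.next_offset, out);
    return 0;
}

The point is that each iteration does one chunk's worth of work with no
partial-chunk bookkeeping, and the caller can stop as soon as it has
enough bytes.
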
--
John Naylor https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services