From: | "Hayato Kuroda (Fujitsu)" <kuroda(dot)hayato(at)fujitsu(dot)com> |
---|---|
To: | 'Masahiko Sawada' <sawada(dot)mshk(at)gmail(dot)com>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> |
Cc: | Shlok Kyal <shlok(dot)kyal(dot)oss(at)gmail(dot)com>, David Rowley <dgrowleyml(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
Subject: | RE: Using per-transaction memory contexts for storing decoded tuples |
Date: | 2024-10-03 04:42:05 |
Message-ID: | TYAPR01MB5692177C9AA8A7433654009BF5712@TYAPR01MB5692.jpnprd01.prod.outlook.com |
Lists: pgsql-hackers
Dear Sawada-san, Amit,
> > So, decoding a large transaction with many smaller allocations can
> > have ~2.2% overhead with a smaller block size (say 8kB vs 8MB). In
> > real workloads, we will have fewer such large transactions or a mix of
> > small and large transactions. That will make the overhead much less
> > visible. Does this mean that we should invent some strategy to defrag
> > the memory at some point during decoding or use any other technique? I
> > don't find this overhead above the threshold to invent something
> > fancy. What do others think?
>
> I agree that the overhead will be much less visible in real workloads.
> +1 to use a smaller block (i.e. 8kB). It's easy to backpatch to old
> branches (if we agree) and to revert the change in case something
> happens.
I am also fine with that. Just to confirm: you will not push the
rb_mem_block_size patch and will instead simply replace
SLAB_LARGE_BLOCK_SIZE with SLAB_DEFAULT_BLOCK_SIZE, right? It seems that
only reorderbuffer.c uses the LARGE macro, so it could be removed as well.
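
For reference, here is a rough sketch of what that replacement could look
like in ReorderBufferAllocate() (paraphrased from the current sources, so
please treat it as an illustration rather than the actual patch):

    /* src/include/utils/memutils.h already defines both sizes: */
    #define SLAB_DEFAULT_BLOCK_SIZE     (8 * 1024)          /* 8kB */
    #define SLAB_LARGE_BLOCK_SIZE       (8 * 1024 * 1024)   /* 8MB */

    /* In ReorderBufferAllocate(), the tuple context would then use the
     * default block size instead of the large one: */
    buffer->tup_context = GenerationContextCreate(new_ctx,
                                                  "Tuples",
                                                  SLAB_DEFAULT_BLOCK_SIZE,
                                                  SLAB_DEFAULT_BLOCK_SIZE,
                                                  SLAB_DEFAULT_BLOCK_SIZE);

If SLAB_LARGE_BLOCK_SIZE is then no longer referenced anywhere, the macro
in memutils.h could be dropped in the same patch.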
Best regards,
Hayato Kuroda
FUJITSU LIMITED