From: Noah Misch <noah(at)leadboat(dot)com>
To: John Naylor <john(dot)naylor(at)enterprisedb(dot)com>
Cc: Matthias van de Meent <boekewurm+postgres(at)gmail(dot)com>, Floris Van Nee <florisvannee(at)optiver(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: non-HOT update not looking at FSM for large tuple update
Date: 2021-03-27 07:00:31
Message-ID: 20210327070031.GA4140566@rfd.leadboat.com
Lists: pgsql-hackers
I gather this is important when large_upd_rate=rate(cross-page update bytes
for tuples larger than fillfactor) exceeds small_ins_rate=rate(insert bytes
for tuples NOT larger than fillfactor). That is a plausible outcome when
inserts are rare, and table bloat then accrues at
large_upd_rate-small_ins_rate. I agree this patch improves behavior.
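To put hypothetical numbers on that: if cross-page updates of larger-than-fillfactor tuples write 10 MB/hour of new versions while inserts of not-larger-than-fillfactor tuples reuse only 2 MB/hour of freed space, today's behavior bloats the table at roughly 10-2 = 8 MB/hour.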
Does anyone have a strong opinion on whether to back-patch? I am weakly
inclined not to back-patch, because today's behavior might happen to perform
better when large_upd_rate-small_ins_rate<0. Besides the usual choices of
back-patching or not back-patching, we could back-patch with a stricter
threshold. Suppose we accepted pages for larger-than-fillfactor tuples when
the pages have at least
BLCKSZ-SizeOfPageHeaderData-sizeof(ItemIdData)-MAXALIGN(MAXALIGN(SizeofHeapTupleHeader)+1)+1
bytes of free space. That wouldn't reuse a page containing a one-column
tuple, but it would reuse a page having up to eight line pointers.
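To make that arithmetic concrete, here is a standalone sketch assuming the stock values (8-byte MAXALIGN, 24-byte page header, 4-byte line pointer, 23-byte heap tuple header); the real code would use the macros from bufpage.h and htup_details.h instead of these local defines:

#include <stdio.h>

#define BLCKSZ                  8192
#define MAXIMUM_ALIGNOF         8
#define MAXALIGN(LEN)           (((LEN) + (MAXIMUM_ALIGNOF - 1)) & ~(MAXIMUM_ALIGNOF - 1))
#define SizeOfPageHeaderData    24      /* offsetof(PageHeaderData, pd_linp) */
#define SizeOfItemIdData        4       /* sizeof(ItemIdData) */
#define SizeofHeapTupleHeader   23      /* offsetof(HeapTupleHeaderData, t_bits) */

int
main(void)
{
	/* stricter back-patch threshold proposed above */
	int		threshold = BLCKSZ - SizeOfPageHeaderData - SizeOfItemIdData -
		MAXALIGN(MAXALIGN(SizeofHeapTupleHeader) + 1) + 1;

	/* free space on a page holding nothing but leftover line pointers */
	int		empty_page_free = BLCKSZ - SizeOfPageHeaderData;

	/* how many such line pointers can remain while still meeting the threshold */
	int		n = (empty_page_free - threshold) / SizeOfItemIdData;

	printf("threshold = %d bytes; tolerates up to %d line pointers\n",
		   threshold, n);
	return 0;
}

At BLCKSZ=8192 that prints a threshold of 8133 bytes, which an otherwise-empty page meets with at most eight leftover line pointers.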
On Fri, Mar 19, 2021 at 02:16:22PM -0400, John Naylor wrote:
> --- a/src/backend/access/heap/hio.c
> +++ b/src/backend/access/heap/hio.c
> @@ -335,11 +335,24 @@ RelationGetBufferForTuple(Relation relation, Size len,
> + const Size maxPaddedFsmRequest = MaxHeapTupleSize -
> + (MaxHeapTuplesPerPage / 8 * sizeof(ItemIdData));
In evaluating whether this is a good choice of value, I think about the
expected page lifecycle. A tuple barely larger than fillfactor (roughly
len=1+BLCKSZ*fillfactor/100) will start on a roughly-empty page. As long as
the tuple exists, the server will skip that page for inserts. Updates can
cause up to floor(99/fillfactor) same-size versions of the tuple to occupy the
page simultaneously, creating that many line pointers. At the fillfactor=10
minimum, it's good to accept otherwise-empty pages having at least nine line
pointers, so a page can restart the aforementioned lifecycle. Tolerating even
more line pointers helps when updates reduce tuple size or when the page was
used for smaller tuples before it last emptied. At the BLCKSZ=8192 default,
this maxPaddedFsmRequest allows 36 line pointers (good or somewhat high). At
the BLCKSZ=1024 minimum, it allows 4 line pointers (low). At the BLCKSZ=32768
maximum, 146 (likely excessive). I'm not concerned about optimizing
non-default block sizes, so let's keep your proposal.
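For reference, those counts fall out of MaxHeapTuplesPerPage / 8. A standalone sketch using the same stock constants as above (again, not the real macros) reproduces them:

#include <stdio.h>

#define MAXIMUM_ALIGNOF         8
#define MAXALIGN(LEN)           (((LEN) + (MAXIMUM_ALIGNOF - 1)) & ~(MAXIMUM_ALIGNOF - 1))
#define SizeOfPageHeaderData    24
#define SizeOfItemIdData        4
#define SizeofHeapTupleHeader   23

static int
tolerated_line_pointers(int blcksz)
{
	/* mirrors the MaxHeapTuplesPerPage macro from htup_details.h */
	int		max_heap_tuples_per_page = (blcksz - SizeOfPageHeaderData) /
		(MAXALIGN(SizeofHeapTupleHeader) + SizeOfItemIdData);

	/* the patch discounts one eighth of that many line pointers */
	return max_heap_tuples_per_page / 8;
}

int
main(void)
{
	int		sizes[] = {1024, 8192, 32768};

	for (int i = 0; i < 3; i++)
		printf("BLCKSZ=%d: %d line pointers\n",
			   sizes[i], tolerated_line_pointers(sizes[i]));
	return 0;
}

That prints 4, 36, and 146 for the three block sizes.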
Comments and the maxPaddedFsmRequest variable name use the term "fsm" for things
not specific to the FSM. For example, the patch's test case doesn't use the
FSM. (That is fine. Ordinarily, RelationGetTargetBlock() furnishes its
block. Under CLOBBER_CACHE_ALWAYS, the "try the last page" logic does so. An
FSM-using test would contain a VACUUM.) I plan to commit the attached
version; compared to v5, it updates comments and renames this variable.
Thanks,
nm
Attachment: fillfactor-insert-large-v6.patch (text/plain, 7.8 KB)