Re: Improve eviction algorithm in ReorderBuffer

From: vignesh C <vignesh21(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, "Hayato Kuroda (Fujitsu)" <kuroda(dot)hayato(at)fujitsu(dot)com>, Shubham Khanna <khannashubham1197(at)gmail(dot)com>, Dilip Kumar <dilipbalaut(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Improve eviction algorithm in ReorderBuffer
Date: 2024-03-05 03:20:38
Message-ID: CALDaNm0-fSZdo=fVpSBfO+4_c+8HGuzWxZFZhpRPhjpPc5pifA@mail.gmail.com
Lists: pgsql-hackers

On Wed, 28 Feb 2024 at 11:40, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Mon, Feb 26, 2024 at 7:54 PM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> >
>
> A few comments on 0003:
> ===================
> 1.
> +/*
> + * Threshold of the total number of top-level and sub transactions that
> + * controls whether we switch the memory track state. While the
> + * MAINTAIN_HEAP state is effective when there are many transactions being
> + * decoded, in many systems there is generally no need to use it as long
> + * as all transactions being decoded are top-level transactions. Therefore,
> + * we use MaxConnections as the threshold so we can avoid switching to that
> + * state unless subtransactions are in use.
> + */
> +#define REORDER_BUFFER_MEM_TRACK_THRESHOLD MaxConnections
>
> The comment seems to imply that MAINTAIN_HEAP is useful for a large
> number of transactions, but ReorderBufferLargestTXN() switches to this
> state even when there is only one transaction. So, basically we use the
> binary_heap technique to get the largest transaction even when we have
> only one, but we don't maintain that heap unless at least
> REORDER_BUFFER_MEM_TRACK_THRESHOLD transactions are in progress. This
> means there is some additional work (building and resetting the heap
> each time we pick the largest xact) when we have fewer transactions in
> the system, but that may not impact us much because of the other costs
> involved, like serializing all the changes. I think we can try to
> stress test this by setting debug_logical_replication_streaming to
> 'immediate' to see if the new mechanism has any overhead.
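
The tradeoff described above can be sketched in a few lines. This is not
PostgreSQL code and none of these names exist in the tree (SketchTxn,
largest_txn, MEM_TRACK_THRESHOLD are all illustrative, with the threshold
standing in for MaxConnections); it only shows the idea under discussion:
scan linearly for the largest transaction while few transactions are in
progress, and pay the heap-maintenance cost only past the threshold.

```c
#include <assert.h>
#include <stddef.h>

#define MEM_TRACK_THRESHOLD 4   /* stand-in for MaxConnections */

typedef struct SketchTxn { size_t size; } SketchTxn;

/* Linear scan: O(n) per eviction, but no per-update maintenance cost. */
static SketchTxn *largest_by_scan(SketchTxn *txns, int ntxns)
{
    SketchTxn *largest = NULL;
    for (int i = 0; i < ntxns; i++)
        if (largest == NULL || txns[i].size > largest->size)
            largest = &txns[i];
    return largest;
}

/* Restore the max-heap property below index i. */
static void sift_down(SketchTxn **h, int n, int i)
{
    for (;;)
    {
        int l = 2 * i + 1, r = 2 * i + 2, m = i;
        if (l < n && h[l]->size > h[m]->size) m = l;
        if (r < n && h[r]->size > h[m]->size) m = r;
        if (m == i)
            break;
        SketchTxn *tmp = h[i]; h[i] = h[m]; h[m] = tmp;
        i = m;
    }
}

/* Build a max-heap and read the top: O(n) build here, but a heap that is
 * kept up to date amortizes that cost across many evictions. Assumes
 * ntxns <= 64 for this sketch. */
static SketchTxn *largest_by_heap(SketchTxn *txns, int ntxns)
{
    SketchTxn *heap[64];
    for (int i = 0; i < ntxns; i++)
        heap[i] = &txns[i];
    for (int i = ntxns / 2 - 1; i >= 0; i--)
        sift_down(heap, ntxns, i);
    return ntxns > 0 ? heap[0] : NULL;
}

/* The switch being discussed: scan below the threshold, heap above it.
 * Both paths return the same transaction; only the cost profile differs. */
static SketchTxn *largest_txn(SketchTxn *txns, int ntxns)
{
    if (ntxns < MEM_TRACK_THRESHOLD)
        return largest_by_scan(txns, ntxns);
    return largest_by_heap(txns, ntxns);
}
```

Either path yields the same answer; the question the benchmark below tries
to answer is whether the extra heap work is measurable when transactions
are few but large.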

I ran the test with a transaction having many inserts:

        |  5000 | 10000 | 20000 | 100000 | 1000000 | 10000000
------- | ----- | ----- | ----- | ------ | ------- | --------
Head    | 26.31 | 48.84 | 93.65 | 480.05 | 4808.29 | 47020.16
Patch   | 26.35 | 50.80 | 97.99 | 484.80 | 4856.95 | 48108.89

The same test with debug_logical_replication_streaming = 'immediate':

        |  5000 |  10000 |  20000 |  100000 |  1000000 |  10000000
------- | ----- | ------ | ------ | ------- | -------- | ---------
Head    | 59.29 | 115.84 | 227.21 | 1156.08 | 11367.42 | 113986.14
Patch   | 62.45 | 120.48 | 240.56 | 1185.12 | 11855.37 | 119921.81

The execution time is in milliseconds; each column header gives the
number of inserts in the transaction.
In both cases the execution with the patch took slightly more time than
HEAD (up to roughly 2% with default buffering and roughly 5% with
'immediate' streaming at the larger transaction sizes).

Regards,
Vignesh
