Re: Using read_stream in index vacuum

From: Junwang Zhao <zhjwpku(at)gmail(dot)com>
To: "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Subject: Re: Using read_stream in index vacuum
Date: 2024-10-19 15:41:17
Message-ID: CAEG8a3JB+WG9FKmm6cFJn+psJmoiVFvV-N=WEdo0YFcoUSQc3Q@mail.gmail.com
Lists: pgsql-hackers

Hi Andrey,

On Sat, Oct 19, 2024 at 5:39 PM Andrey M. Borodin <x4mmm(at)yandex-team(dot)ru> wrote:
>
> Hi hackers!
>
> On a recent hacking workshop [0] Thomas mentioned that patches using the new API would be welcome.
> So I prototyped streamlining of B-tree vacuum for discussion.
> When cleaning an index we must visit every index tuple, so we uphold a special invariant:
> after checking the trailing block, it must still be the last block according to a subsequent RelationGetNumberOfBlocks(rel) call.
>
> This invariant does not allow us to completely replace the block loop with streamlining. That's why streamlining is done only for the number of blocks returned by the first RelationGetNumberOfBlocks(rel) call. The tail is processed with regular ReadBufferExtended().
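
To make sure I follow the shape of it, here is roughly how I read the idea.
This is only a sketch on my side, not the patch code; I'm assuming the
block-range helper (BlockRangeReadStreamPrivate / block_range_read_stream_cb)
and READ_STREAM_MAINTENANCE, plus the usual btvacuumscan() locals (vstate,
info, scanblkno):

    Relation    rel = info->index;
    BlockRangeReadStreamPrivate p;
    ReadStream *stream;
    BlockNumber num_pages = RelationGetNumberOfBlocks(rel);
    BlockNumber scanblkno = BTREE_METAPAGE + 1;
    Buffer      buf;

    /* Stream only the blocks that exist when the scan starts. */
    p.current_blocknum = scanblkno;
    p.last_exclusive = num_pages;
    stream = read_stream_begin_relation(READ_STREAM_MAINTENANCE,
                                        info->strategy,
                                        rel,
                                        MAIN_FORKNUM,
                                        block_range_read_stream_cb,
                                        &p,
                                        0);
    while ((buf = read_stream_next_buffer(stream, NULL)) != InvalidBuffer)
        btvacuumpage(&vstate, buf);     /* patched btvacuumpage() takes a Buffer */
    read_stream_end(stream);

    /* Blocks that appeared while we were streaming: plain reads, as today. */
    scanblkno = num_pages;
    for (;;)
    {
        num_pages = RelationGetNumberOfBlocks(rel);
        if (scanblkno >= num_pages)
            break;
        for (; scanblkno < num_pages; scanblkno++)
        {
            buf = ReadBufferExtended(rel, MAIN_FORKNUM, scanblkno,
                                     RBM_NORMAL, info->strategy);
            btvacuumpage(&vstate, buf);
        }
    }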

I'm wondering why that is the case. ISTM that we could set
*p.current_blocknum = scanblkno* and *p.last_exclusive = num_pages*
in each iteration of the outer *for* loop?
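
Something along these lines is what I have in mind (same locals as in the
sketch above) -- completely untested, and it assumes read_stream_reset() can
revive an exhausted stream so that block_range_read_stream_cb gets called
again with the updated range:

    /* Hypothetical: create the stream once, then re-prime it per round. */
    p.current_blocknum = scanblkno;
    p.last_exclusive = scanblkno;       /* empty range; set per round below */
    stream = read_stream_begin_relation(READ_STREAM_MAINTENANCE,
                                        info->strategy,
                                        rel,
                                        MAIN_FORKNUM,
                                        block_range_read_stream_cb,
                                        &p,
                                        0);

    for (;;)
    {
        /* Re-check the relation size, as the current code does. */
        num_pages = RelationGetNumberOfBlocks(rel);
        if (scanblkno >= num_pages)
            break;

        /* Point the stream at the newly visible range and drain it. */
        p.current_blocknum = scanblkno;
        p.last_exclusive = num_pages;
        read_stream_reset(stream);

        while ((buf = read_stream_next_buffer(stream, NULL)) != InvalidBuffer)
            btvacuumpage(&vstate, buf);

        scanblkno = num_pages;
    }

    read_stream_end(stream);

If the stream cannot be re-armed like that after the callback has returned
InvalidBlockNumber, then I see why the tail has to fall back to
ReadBufferExtended().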

+ /* We only streamline number of blocks that are know at the beginning */
know -> known

+ * However, we do not depent on it much, and in future ths
+ * expetation might change.

depent -> depend
ths -> this
expetation -> expectation

>
> Also, it's worth mentioning that we have to jump to the left blocks from recently split pages. We also do that with regular ReadBufferExtended(). That's why btvacuumpage() now accepts a buffer, not a block number.
>
>
> I've benchmarked the patch on my laptop (MacBook Air M3) with the following workload:
> 1. Initialization
> create unlogged table x as select random() r from generate_series(1,1e7);
> create index on x(r);
> create index on x(r);
> create index on x(r);
> create index on x(r);
> create index on x(r);
> create index on x(r);
> create index on x(r);
> vacuum;
> 2. pgbench with 1 client
> insert into x select random() from generate_series(0,10) x;
> vacuum x;
>
> On my laptop I see a ~3% increase in TPS of the pgbench run (from ~101 to ~104), but the statistical noise is very significant, bigger than the performance change. Perhaps a less noisy benchmark can be devised.
>
> What do you think? If this approach seems worthwhile, I can adapt the same technique to other AMs.
>

I think this is a use case where the read stream API fits very well, thanks.

>
> Best regards, Andrey Borodin.
>
> [0] https://rhaas.blogspot.com/2024/08/postgresql-hacking-workshop-september.html
>

--
Regards
Junwang Zhao
