From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Melanie Plageman <melanieplageman(at)gmail(dot)com>
Cc: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>
Subject: Re: why do hash index builds use smgrextend() for new splitpoint pages
Date: 2022-02-25 03:31:15
Message-ID: CAA4eK1Lp1Zvt+L3hjqeUS4se259Fse32w=SRxzHH=ExwVB7ErA@mail.gmail.com
Lists: pgsql-hackers
On Fri, Feb 25, 2022 at 8:54 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Fri, Feb 25, 2022 at 4:41 AM Melanie Plageman
> <melanieplageman(at)gmail(dot)com> wrote:
> >
> > I'm trying to understand why hash indexes are built primarily in shared
> > buffers except when allocating a new splitpoint's worth of bucket pages
> > -- which is done with smgrextend() directly in _hash_alloc_buckets().
> >
> > Is this just so that the value returned by smgrnblocks() includes the
> > new splitpoint's worth of bucket pages?
> >
> > All writes of tuple data to pages in this new splitpoint will go
> > through shared buffers (via _hash_getnewbuf()).
> >
> > I asked this and got some thoughts from Robert in [1], but I still don't
> > really get it.
> >
> > When a new page is needed during the hash index build, why can't
> > _hash_expandtable() just call ReadBufferExtended() with P_NEW instead of
> > _hash_getnewbuf()? Does it have to do with the BUCKET_TO_BLKNO mapping?
> >
>
> We allocate the chunk of pages (in power-of-2 groups) at the time of
> a split, which allows them to appear consecutively in the index; the
> BUCKET_TO_BLKNO mapping relies on that consecutive layout.
>
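For illustration, here is a minimal sketch of that scheme, loosely
simplified from _hash_alloc_buckets() in
src/backend/access/hash/hashpage.c (the real function also sets up the
page as an "unused" overflow page and guards against BlockNumber
wraparound, which this sketch omits):

#include "postgres.h"

#include "access/hash.h"
#include "storage/bufpage.h"
#include "storage/smgr.h"
#include "utils/rel.h"

/*
 * Allocate an entire power-of-2 chunk of bucket pages by writing only
 * its last block.
 */
static void
sketch_alloc_buckets(Relation rel, BlockNumber firstblock, uint32 nblocks)
{
    BlockNumber lastblock = firstblock + nblocks - 1;
    PGAlignedBlock zerobuf;
    Page        page = (Page) zerobuf.data;

    /* Build one page image for the final block of the chunk. */
    _hash_pageinit(page, BLCKSZ);
    PageSetChecksumInplace(page, lastblock);

    /*
     * A single smgrextend() of the last block makes smgrnblocks()
     * report the whole chunk as allocated; the intervening blocks
     * become a filesystem hole and are written through shared buffers
     * later, when _hash_getnewbuf() hands them out.
     */
    smgrextend(RelationGetSmgr(rel), MAIN_FORKNUM, lastblock,
               zerobuf.data, false);
}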
I think allocating chunks of pages via ReadBufferExtended() with
P_NEW would be more time-consuming than what we do now: P_NEW extends
the relation by only one block per call, going through the buffer
manager each time, whereas a single smgrextend() of the chunk's last
block allocates the entire splitpoint at once. A sketch of the
per-page alternative is below.
--
With Regards,
Amit Kapila.