From: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Melanie Plageman <melanieplageman(at)gmail(dot)com>, John Naylor <johncnaylorls(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>, "Hayato Kuroda (Fujitsu)" <kuroda(dot)hayato(at)fujitsu(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel heap vacuum
Date: 2025-03-13 07:19:53
Message-ID: CAD21AoA8+=9s3qEF-iTpr_WxjTjdvMOU5t3Rc_XkOpcX1L8gNA@mail.gmail.com
Lists: pgsql-hackers
On Wed, Mar 12, 2025 at 3:05 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>
> On Wed, Mar 12, 2025 at 3:12 AM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> >
> > On Tue, Mar 11, 2025 at 6:00 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > >
> > > On Mon, Mar 10, 2025 at 11:57 PM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> > > >
> > > > On Sun, Mar 9, 2025 at 11:12 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > > > >
> > > > >
> > > > > > However, in the heap vacuum phase, the leader process needed
> > > > > > to process all blocks, resulting in soft page faults while creating
> > > > > > Page Table Entries (PTEs). Without the patch, the backend process had
> > > > > > already created PTEs during the heap scan, thus preventing these
> > > > > > faults from occurring during the heap vacuum phase.
> > > > > >
> > > > >
> > > > > This part is again not clear to me because I am assuming all the data
> > > > > exists in shared buffers before the vacuum, so why would the page
> > > > > faults occur in the first place?
> > > >
> > > > IIUC PTEs are process-local data. So even if physical pages are loaded
> > > > into PostgreSQL's shared buffers (and page caches), soft page faults (or
> > > > minor page faults)[1] can occur if these pages are not yet mapped in
> > > > its page table.
> > > >
> > >
> > > Okay, I got your point. BTW, I noticed that even for the case where
> > > all the data is in shared_buffers, the performance improvement
> > > decreases marginally with more than two workers. Am I reading the
> > > data correctly? If so, what is the theory, and do we have
> > > recommendations for a parallel degree?
> >
> > Is the decrease you referred to the total vacuum execution time?
> >
>
> Right.
>
> > When it comes to the execution time of phase 1, it seems we have good
> > scalability. For example, with 2 workers (i.e., 3 processes working in
> > total, including the leader) it got about a 3x speedup, and with 4
> > workers it got about a 5x speedup. Regarding the other phases, phase 3
> > got slower probably because of the PTE issue, but I haven't
> > investigated why phase 2 also got slightly slower with more than 2
> > workers.
> >
>
> Could it be possible that phase 2 now needs to access the shared area
> for TIDs, and some locking/unlocking causes such a slowdown?
No, the TidStore is shared in this case, but we don't take a lock on it
during phase 2.
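
As an aside on the PTE point upthread, below is a minimal,
PostgreSQL-independent C sketch illustrating why that happens: even after one
process has faulted in all pages of a POSIX shared memory segment, a second
process that maps the same segment still takes a soft page fault on each page
the first time it touches it, because PTEs are per-process. The segment name,
size, and the sleep-based ordering are just placeholders for illustration
(error handling omitted).

/*
 * Sketch: PTEs are per-process, so touching already-resident shared memory
 * for the first time still causes soft (minor) page faults.
 * Build: cc pte_demo.c -o pte_demo   (add -lrt on older glibc)
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

#define SHM_NAME "/pte_demo"                 /* arbitrary name */
#define SHM_SIZE (256 * 1024 * 1024)         /* 256 MB */

static long
minor_faults(void)
{
    struct rusage ru;

    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

static void
touch_all(volatile char *p)
{
    for (size_t off = 0; off < SHM_SIZE; off += 4096)
        p[off] = 1;                          /* one access per page */
}

int
main(void)
{
    int     fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
    char   *p;
    long    before;

    ftruncate(fd, SHM_SIZE);

    if (fork() == 0)
    {
        /* child: crude wait until the parent has touched every page */
        sleep(2);
        p = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        before = minor_faults();
        touch_all(p);
        /* pages are already resident, yet the child still faults on each one */
        printf("child  minor faults: %ld\n", minor_faults() - before);
        return 0;
    }

    p = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    before = minor_faults();
    touch_all(p);
    printf("parent minor faults: %ld\n", minor_faults() - before);

    wait(NULL);
    shm_unlink(SHM_NAME);
    return 0;
}

IIUC both processes report roughly SHM_SIZE / 4096 faults, although only the
parent's faults allocate physical pages; the child's faults are purely PTE
creation, which is the effect we are seeing for the leader in phase 3.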
Regards,
--
Masahiko Sawada
Amazon Web Services: https://aws.amazon.com