From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
Cc: Melanie Plageman <melanieplageman(at)gmail(dot)com>, John Naylor <johncnaylorls(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>, "Hayato Kuroda (Fujitsu)" <kuroda(dot)hayato(at)fujitsu(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel heap vacuum
Date: 2025-03-12 10:05:24
Message-ID: CAA4eK1JSmi3YQzoVXEDaB+5UZbYPSFk9ffj=3hQSn29d-OxFDg@mail.gmail.com
Lists: pgsql-hackers
On Wed, Mar 12, 2025 at 3:12 AM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
>
> On Tue, Mar 11, 2025 at 6:00 AM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >
> > On Mon, Mar 10, 2025 at 11:57 PM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> wrote:
> > >
> > > On Sun, Mar 9, 2025 at 11:12 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> > > >
> > > >
> > > > > However, in the heap vacuum phase, the leader process needed
> > > > > to process all blocks, resulting in soft page faults while creating
> > > > > Page Table Entries (PTEs). Without the patch, the backend process had
> > > > > already created PTEs during the heap scan, thus preventing these
> > > > > faults from occurring during the heap vacuum phase.
> > > > >
> > > >
> > > > This part is again not clear to me because I am assuming all the
> > > > data exists in shared buffers before the vacuum, so why would page
> > > > faults occur in the first place?
> > >
> > > IIUC PTEs are process-local data. So even if physical pages are loaded
> > > into PostgreSQL's shared buffers (and page caches), soft page faults
> > > (or minor page faults)[1] can occur if those pages are not yet mapped
> > > in the process's own page table.
> > >
> >
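For anyone following along, here is a minimal standalone Linux sketch
(plain C, not PostgreSQL code; the mapping size is arbitrary) of the
effect described above: a child process faults every page of a shared
mapping in, and the parent still takes one minor fault per page on its
own first touch, because PTEs are per-process even when the pages
themselves are resident.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

static long
minor_faults(void)
{
    struct rusage ru;

    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int
main(void)
{
    size_t  len = 256UL * 1024 * 1024;  /* stand-in for shared_buffers */
    char   *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);

    if (buf == MAP_FAILED)
        return 1;

    /* Child faults every page in, like the backend doing the heap scan. */
    if (fork() == 0)
    {
        memset(buf, 1, len);
        _exit(0);
    }
    wait(NULL);

    /*
     * All pages are now resident in the shared mapping, yet this process
     * still takes one minor fault per page: it has no PTEs for them.
     * This mirrors the leader touching blocks in the heap vacuum phase.
     */
    long    before = minor_faults();

    memset(buf, 2, len);
    printf("minor faults on first touch by this process: %ld\n",
           minor_faults() - before);

    munmap(buf, len);
    return 0;
}

Compiled with "cc -O2 demo.c", the reported count should be roughly
len / page size (65536 for 256MB with 4kB pages).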
> > Okay, I got your point. BTW, I noticed that even for the case where
> > all the data is in shared_buffers, the performance improvement
> > decreases marginally with more than two workers. Am I reading the
> > data correctly? If so, what is the theory, and do we have
> > recommendations for the parallel degree?
>
> Is the decrease you referred to in the total vacuum execution time?
>
Right.
> When it comes to the execution time of phase 1, it seems we have good
> scalability. For example, with 2 workers (i.e., 3 processes in total
> including the leader) it got about a 3x speedup, and with 4 workers it
> got about a 5x speedup. Regarding the other phases, phase 3 got slower,
> probably because of the PTE issue, but I haven't investigated why
> phase 2 also got slightly slower with more than 2 workers.
>
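FWIW, those phase 1 numbers amount to near-linear scaling: speedup
divided by total process count is roughly 3/3 with 2 workers and 5/5
with 4 workers, i.e., a parallel efficiency close to 1.0.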
Could it be that phase 2 now needs to access the shared area for TIDs,
and that the locking/unlocking overhead causes this slowdown?
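
To illustrate the kind of effect I have in mind, here is a hypothetical,
self-contained sketch (plain C with pthreads, not PostgreSQL code;
SharedTidStore and the lookup are invented for illustration): if every
dead-TID lookup in phase 2 took a lock on a shared store, the lookups
would serialize and extra workers would mostly add lock traffic.

#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define NTIDS    (1 << 20)
#define NLOOKUPS 2000000

typedef struct SharedTidStore
{
    pthread_mutex_t lock;       /* stand-in for shared-memory locking */
    unsigned long  *tids;       /* sorted "dead TID" array */
    size_t          ntids;
} SharedTidStore;

static SharedTidStore store;

/* Locked binary search, as each worker might do once per heap tuple. */
static bool
tid_is_dead(unsigned long tid)
{
    bool    found = false;
    size_t  lo = 0, hi;

    pthread_mutex_lock(&store.lock);    /* serializes all workers */
    hi = store.ntids;
    while (lo < hi)
    {
        size_t  mid = lo + (hi - lo) / 2;

        if (store.tids[mid] == tid)
        {
            found = true;
            break;
        }
        if (store.tids[mid] < tid)
            lo = mid + 1;
        else
            hi = mid;
    }
    pthread_mutex_unlock(&store.lock);
    return found;
}

static void *
worker(void *arg)
{
    unsigned int seed = (unsigned int) (size_t) arg;
    long         hits = 0;

    for (int i = 0; i < NLOOKUPS; i++)
        hits += tid_is_dead(rand_r(&seed) % (2 * NTIDS));
    return (void *) (size_t) hits;
}

int
main(int argc, char **argv)
{
    int         nworkers = (argc > 1) ? atoi(argv[1]) : 4;
    pthread_t   threads[64];

    if (nworkers < 1 || nworkers > 64)
        nworkers = 4;

    pthread_mutex_init(&store.lock, NULL);
    store.ntids = NTIDS;
    store.tids = malloc(NTIDS * sizeof(unsigned long));
    for (size_t i = 0; i < NTIDS; i++)
        store.tids[i] = 2 * i;          /* even "TIDs" are dead */

    for (int i = 0; i < nworkers; i++)
        pthread_create(&threads[i], NULL, worker, (void *) (size_t) i);
    for (int i = 0; i < nworkers; i++)
        pthread_join(threads[i], NULL);

    puts("done; time this with different worker counts");
    return 0;
}

Timing this (cc -O2 -pthread) with different worker counts shows
throughput flattening once the lock becomes the bottleneck; whether the
real shared TID store behaves this way is exactly the question.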
--
With Regards,
Amit Kapila.