From: | John Naylor <john(dot)naylor(at)enterprisedb(dot)com> |
---|---|
To: | Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com> |
Cc: | Nathan Bossart <nathandbossart(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Matthias van de Meent <boekewurm+postgres(at)gmail(dot)com>, Yura Sokolov <y(dot)sokolov(at)postgrespro(dot)ru>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: [PoC] Improve dead tuple storage for lazy vacuum |
Date: | 2023-01-30 04:31:41 |
Message-ID: | CAFBsxsG0wn9h7JJcMR03fdTmvwUaU0c9ps3_Bro98ndN7ysO=g@mail.gmail.com |
Views: | Raw Message | Whole Thread | Download mbox | Resend email |
Thread: | |
Lists: | pgsql-hackers |
On Sun, Jan 29, 2023 at 9:50 PM Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>
wrote:
>
> On Sat, Jan 28, 2023 at 8:33 PM John Naylor
> <john(dot)naylor(at)enterprisedb(dot)com> wrote:
> > The first implementation should be simple, easy to test/verify, easy to
understand, and easy to replace. As much as possible anyway.
>
> Yes, but if a concurrent writer waits for another process to finish
> the iteration, it ends up waiting on a lwlock, which is not
> interruptible.
>
> >
> > > So the idea is that we set iter_active to true (with the
> > > lock in exclusive mode), and prevent concurrent updates when the flag
> > > is true.
> >
> > ...by throwing elog(ERROR)? I'm not so sure users of this API would
prefer that to waiting.
>
> Right. I think if we want to wait rather than an ERROR, the waiter
> should wait in an interruptible way, for example, a condition
> variable. I did a simpler way in the v22 patch.
>
> ...but looking at dshash.c, dshash_seq_next() seems to return an entry
> while holding a lwlock on the partition. My assumption might be wrong.
Using partitions there makes holding a lock less painful on average, I
imagine, but I don't know the details there.
If we make it clear that the first committed version is not (yet) designed
for high concurrency with mixed read-write workloads, I think waiting (as a
protocol) is fine. If waiting is a problem for some use case, at that point
we should just go all the way and replace the locking entirely. In fact, it
might be good to spell this out in the top-level comment and include a link
to the second ART paper.
> > [thinks some more...] Is there an API-level assumption that hasn't been
spelled out? Would it help to have a parameter for whether the iteration
function wants to reserve the privilege to perform writes? It could take
the appropriate lock at the start, and there could then be multiple
read-only iterators, but only one read/write iterator. Note, I'm just
guessing here, and I don't want to make things more difficult for future
improvements.
>
> Seems a good idea. Given the use case for parallel heap vacuum, it
> would be a good idea to support having multiple read-only writers. The
> iteration of the v22 is read-only, so if we want to support read-write
> iterator, we would need to support a function that modifies the
> current key-value returned by the iteration.
Okay, so updating during iteration is not currently supported. It could in
the future, but I'd say that can also wait for fine-grained concurrency
support. Intermediate-term, we should at least make it straightforward to
support:
1) parallel heap vacuum -> multiple read-only iterators
2) parallel heap pruning -> multiple writers
It may or may not be worth it for someone to actually start either of those
projects, and there are other ways to improve vacuum that may be more
pressing. That said, it seems the tid store with global locking would
certainly work fine for #1 and maybe "not too bad" for #2. #2 can also
mitigate waiting by using larger batching, or the leader process could
"pre-warm" the tid store with zero-values using block numbers from the
visibility map.
--
John Naylor
EDB: http://www.enterprisedb.com
From | Date | Subject | |
---|---|---|---|
Next Message | Amit Kapila | 2023-01-30 04:57:14 | Re: Assertion failure in SnapBuildInitialSnapshot() |
Previous Message | vignesh C | 2023-01-30 04:16:45 | Re: Deadlock between logrep apply worker and tablesync worker |