From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Removing more vacuumlazy.c special cases, relfrozenxid optimizations
Date: 2022-02-04 22:26:46
Message-ID: CAH2-WznQUHKVPX0ApUnhvKX22ucTbqBUG9erk4tvSwDc2rUvMw@mail.gmail.com
Lists: pgsql-hackers
On Fri, Feb 4, 2022 at 4:18 PM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Fri, Feb 4, 2022 at 3:31 PM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> > Application B will already block pruning by VACUUM operations against
> > application A's table, and so effectively blocks recording of the
> > resultant free space in the FSM in your scenario. And so application A
> > and application B should be considered the same application already.
> > That's just how VACUUM works.
>
> Sure ... but that also sucks. If we consider application A and
> application B to be the same application, then we're basing our
> decision about what to do on information that is inaccurate.
I agree that it sucks, but I don't think that it's particularly
relevant to the FSM prototype patch that I included with v7 of the
patch series. A heap page cannot be considered "closed" (either in the
specific sense from the patch, or in any informal sense) when it has
recently dead tuples.
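
To make that concrete, here is a minimal sketch (my illustration, not code from the v7 patch) of what such a check could look like. heap_page_would_be_closed is a hypothetical helper; HeapTupleSatisfiesVacuum and HEAPTUPLE_RECENTLY_DEAD are the real visibility interface:

    #include "postgres.h"
    #include "access/heapam.h"
    #include "storage/bufmgr.h"
    #include "storage/bufpage.h"
    #include "utils/rel.h"

    /*
     * Hypothetical helper, not from the patch: a heap page with any
     * recently dead tuple cannot be treated as "closed", because some
     * snapshot may still need those tuples and pruning cannot remove
     * them yet.  Real "closed" criteria would involve more than this.
     */
    static bool
    heap_page_would_be_closed(Relation rel, Buffer buf,
                              TransactionId oldest_xmin)
    {
        Page        page = BufferGetPage(buf);
        BlockNumber blkno = BufferGetBlockNumber(buf);
        OffsetNumber off,
                    maxoff = PageGetMaxOffsetNumber(page);

        for (off = FirstOffsetNumber; off <= maxoff; off++)
        {
            ItemId      itemid = PageGetItemId(page, off);
            HeapTupleData tuple;

            if (!ItemIdIsNormal(itemid))
                continue;       /* skip unused/redirect/dead line pointers */

            tuple.t_data = (HeapTupleHeader) PageGetItem(page, itemid);
            tuple.t_len = ItemIdGetLength(itemid);
            tuple.t_tableOid = RelationGetRelid(rel);
            ItemPointerSet(&tuple.t_self, blkno, off);

            if (HeapTupleSatisfiesVacuum(&tuple, oldest_xmin, buf) ==
                HEAPTUPLE_RECENTLY_DEAD)
                return false;   /* still needed by some snapshot */
        }

        return true;            /* other criteria would also apply */
    }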
At some point we should invent a fallback path for pruning that
migrates recently dead tuples to some other subsidiary structure,
retaining only forwarding information in the heap page. But even that
won't change what I just said about closed pages (it'll just make it
easier to return and fix things up later on).
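
As a rough illustration of the shape of that idea (purely hypothetical; none of this exists in the patch, and all names are invented):

    #include "postgres.h"
    #include "storage/itemptr.h"

    /*
     * Hypothetical sketch of a forwarding stub that pruning could leave
     * behind in the heap page after migrating a recently dead tuple to
     * a subsidiary structure.  Snapshots that still need the old
     * version would follow store_tid to find it.
     */
    typedef struct DeadTupleForward
    {
        ItemPointerData store_tid;  /* location in the subsidiary store */
        TransactionId   xmax;       /* deleting XID, so visibility checks
                                     * can decide whether the forwarding
                                     * pointer needs to be followed */
    } DeadTupleForward;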
> I don't see it that way. There are cases where avoiding writes is
> better, and cases where trying to cram everything into the fewest
> possible pages is better. With the right test case you can make either
> strategy look superior.
The cost of reads is effectively much lower than the cost of writes
on modern SSDs, in TCO terms. Plus, when an FSM strategy like the one
from the patch does badly according to a naive measure such as total
table size, that in itself doesn't mean that we do worse on reads. In
fact, it's quite the opposite.
The benchmark showed that v7 of the patch did very slightly worse on
overall space utilization, but far, far better on reads. In fact, the
benefits for reads were far in excess of any efficiency gains for
writes/WAL. The greatest bottleneck on modern hardware is almost
always latency [1]. It follows that keeping logically related data
grouped together is crucial, and far more important than potentially
using very slightly more space.
The story I wanted to tell with the FSM patch was that open and
closed pages are the right long-term direction. More generally, we
should emphasize managing page-level costs and deemphasize managing
tuple-level costs, which are much less meaningful.
> What I think your test case has going for it
> is that it is similar to something that a lot of people, really a ton
> of people, actually do with PostgreSQL. However, it's not going to be
> an accurate model of what everybody does, and therein lies some
> element of danger.
No question -- agreed.
> > Then what could you have confidence in?
>
> Real-world experience. Which is hard to get if we don't ever commit
> any patches, but a good argument for (a) having them tested by
> multiple different hackers who invent test cases independently and (b)
> some configurability where we can reasonably include it, so that if
> anyone does experience problems they have an escape.
I agree.
[1] https://dl.acm.org/doi/10.1145/1022594.1022596
--
Peter Geoghegan