Re: Add LSN <-> time conversion functionality

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Melanie Plageman <melanieplageman(at)gmail(dot)com>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>, Daniel Gustafsson <daniel(at)yesql(dot)se>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Andres Freund <andres(at)anarazel(dot)de>, Bharath Rupireddy <bharath(dot)rupireddyforpostgres(at)gmail(dot)com>, Ilya Kosmodemiansky <hydrobiont(at)gmail(dot)com>
Subject: Re: Add LSN <-> time conversion functionality
Date: 2024-08-13 18:29:52
Message-ID: CA+TgmoaYzGq8T30nYYujpWVRCTUypMsr6TDFAxB9aNTxe0o-4g@mail.gmail.com
Lists: pgsql-hackers

On Fri, Aug 9, 2024 at 11:48 AM Melanie Plageman
<melanieplageman(at)gmail(dot)com> wrote:
> In the adaptive freezing code, I use the time stream to answer a yes
> or no question. I translate a time in the past (now -
> target_freeze_duration) to an LSN so that I can determine if a page
> that is being modified for the first time after having been frozen has
> been modified sooner than target_freeze_duration (a GUC value). If it
> is, that page was unfrozen too soon. So, my use case is to produce a
> yes or no answer. It doesn't matter very much how accurate I am if I
> am wrong. I count the page as having been unfrozen too soon or I
> don't. So, it seems I care about the accuracy of data from now until
> now - target_freeze_duration + margin of error a lot and data before
> that not at all. While it is true that if I'm wrong about a page that
> was older but near the cutoff, that might be better than being wrong
> about a very recent page, it is still wrong.

I don't really think this is the right way to think about it.

First, you'd really like target_freeze_duration to be something that
can be changed at runtime, but the data structure that you use for the
LSN-time mapping has to be sized at startup time and can't change
thereafter. So I think you should try to design the LSN-time mapping
structure so that it is fixed size -- i.e. independent of the value of
target_freeze_duration -- but capable of producing sufficiently
correct answers for all reasonable values of target_freeze_duration.
Then the user can change the value to whatever they like without a
restart, and still get reasonable behavior. Meanwhile, you don't have
to deal with a variable-size data structure. Woohoo!
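
Just to make that concrete, the kind of thing I have in mind is a
plain fixed-size array of (LSN, time) points -- something like this,
where the names and the size are invented for illustration, not taken
from any patch:

#include "postgres.h"
#include "access/xlogdefs.h"
#include "datatype/timestamp.h"

/* Fixed at compile time; independent of target_freeze_duration. */
#define LSNTIME_ENTRIES 128

typedef struct LSNTimeEntry
{
    XLogRecPtr  lsn;        /* WAL position when the sample was taken */
    TimestampTz time;       /* wall-clock time of the sample */
} LSNTimeEntry;

typedef struct LSNTimeStream
{
    int         nentries;   /* slots currently in use */
    LSNTimeEntry entries[LSNTIME_ENTRIES];  /* kept oldest-first */
} LSNTimeStream;

A hundred-odd slots cost nothing memory-wise, and the structure never
needs to grow no matter what the user sets target_freeze_duration to;
only the spacing of the retained points changes.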

Second, I guess I'm a bit confused about the statement that "It
doesn't matter very much how accurate I am if I am wrong." What does
that really mean? We're going to look at the LSN of a page that we're
thinking about freezing and use that to estimate the time since the
page was last modified and use that to guess whether the page is
likely to be modified again soon and then use that to decide whether
to freeze. Even if we always estimated the time since last
modification perfectly, we could still be wrong about what that means
for the future. And we won't estimate the last modification time
perfectly in all cases, because even if we make perfect decisions
about which data points to throw away, we're still going to use linear
interpolation in between those points, and that can be wrong. And I
think it's pretty much impossible to make flawless decisions about
which points to throw away, too.
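
To make the interpolation part of that concrete: between two retained
points, the estimate is just a straight line. Using the entry struct
sketched above (again, invented names):

static TimestampTz
lsntime_interpolate(const LSNTimeEntry *older, const LSNTimeEntry *newer,
                    XLogRecPtr target)
{
    double      frac;

    Assert(older->lsn <= target && target <= newer->lsn);

    /* No WAL between the points: any time in the interval is as good. */
    if (newer->lsn == older->lsn)
        return newer->time;

    /* Pretend WAL was generated at a constant rate across the interval. */
    frac = (double) (target - older->lsn) /
        (double) (newer->lsn - older->lsn);

    return older->time + (TimestampTz) (frac * (newer->time - older->time));
}

If the actual WAL rate spiked or stalled somewhere inside the
interval, that straight line is simply wrong, and no choice of
retained points can fix it.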

But the point is that we just need to be close enough. If
target_freeze_duration=10m and our page age estimates are off by an
average of 10s, we will still make the correct decision about whether
to freeze most of the time, but if they are off by an average of 1m,
we'll be wrong more often, and if they're off by an average of 10m,
we'll be wrong way more often. When target_freeze_duration=2h, it's
not nearly so bad to be off by 10m. The probability that a page will
be modified again soon when it hasn't been modified in the last 1h54m
is probably not that different from the probability when it hasn't
been modified in 2h4m, but the probability of a page being modified
again soon when it hasn't been modified in the last 4m could well be
quite different from when it hasn't been modified in the last 14m. So
it's completely reasonable, IMHO, to set things up so that you have
higher accuracy for newer LSNs.

I feel like you're making this a lot harder than it needs to be.
There is a genuinely hard problem here, namely where to store the data
-- as Tomas said, pgstat doesn't seem quite right, and it's not clear
to me what is right. But in terms of what to do with the data
structure itself, some kind of exponential thinning of the data seems
like the obvious thing to do. Tomas suggested a version of that and I
suggested a version of that; you could pick either one or do something
of your own, but I don't really feel like we need or want an original
algorithm here. It seems better to just do stuff we know works, and
whose characteristics we can easily predict.
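
To be concrete about "stuff we know works": the simplest version of
exponential thinning is to let a fixed-size array fill up and then
keep every other point. A point that has survived k of those
compactions ends up with 2^k times its original spacing, so resolution
decays exponentially with age, with no per-entry bookkeeping. A sketch
(mine, not either of the specific proposals upthread):

static void
lsntime_insert(LSNTimeStream *stream, XLogRecPtr lsn, TimestampTz time)
{
    if (stream->nentries == LSNTIME_ENTRIES)
    {
        int     keep = 0;

        /*
         * Array is full: keep every other point. Older points have
         * been through more of these halvings than newer ones, so
         * their spacing grows geometrically with age.
         */
        for (int i = 0; i < stream->nentries; i += 2)
            stream->entries[keep++] = stream->entries[i];
        stream->nentries = keep;
    }

    /* New points always come in at full resolution. */
    stream->entries[stream->nentries].lsn = lsn;
    stream->entries[stream->nentries].time = time;
    stream->nentries++;
}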
The only area where I feel like we might want some algorithmic
innovation is in terms of eliding redundant measurements when things
aren't really changing.

But even that seems pretty optional. If you don't do that, and the
system sits there idle for a long time, you will have a needlessly
inaccurate idea of how old the pages are compared to what you could
have had. But also, they will all still look old, so you'll still
freeze them, and you win. The end.
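
That said, if somebody did want the eliding, the cheap version is
barely any code: skip the sample when no WAL has been generated since
the previous point, e.g.:

static bool
lsntime_sample_needed(const LSNTimeStream *stream, XLogRecPtr lsn)
{
    /* The first sample is always worth keeping. */
    if (stream->nentries == 0)
        return true;

    /*
     * If the LSN hasn't advanced, the system is idle and the previous
     * point already tells the whole story.
     */
    return lsn != stream->entries[stream->nentries - 1].lsn;
}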

--
Robert Haas
EDB: http://www.enterprisedb.com
