From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Jan Wieck <jan(at)wi3ck(dot)info>, gregsmithpgsql(at)gmail(dot)com, Robert Haas <robertmhaas(at)gmail(dot)com>, John Naylor <john(dot)naylor(at)enterprisedb(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>
Subject: Re: The Free Space Map: Problems and Opportunities
Date: 2021-08-17 00:42:11
Message-ID: CAH2-Wz=T8_AzZGLidw81AvqwX1LWqUZByoTLZktxc-fVSr5J-Q@mail.gmail.com
Lists: pgsql-hackers
On Mon, Aug 16, 2021 at 5:15 PM Peter Geoghegan <pg(at)bowt(dot)ie> wrote:
> Another concern is knock on effects *after* any initial problematic
> updates -- that's certainly not where it ends. Every action has
> consequences, and these consequences themselves have consequences. By
> migrating a row from an earlier block into a later block (both
> physically and temporally early/late), we're not just changing things
> for that original order row or rows (the order rows non-HOT-updated
> by the delivery transaction).
To be clear, TPC-C looks like this: each order row and each order line
row will be inserted just once, and then later updated just once. But
that's it, forever -- no more modifications. Both tables grow and grow
for as long as the benchmark runs. Users with workloads like this will
naturally expect that performance will remain steady over time, even
over days or weeks. But that's not what we see.
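For concreteness, here is a minimal SQL sketch of that lifecycle (not
the benchmark's real schema -- the table and column names below are
simplified, illustrative stand-ins): a row is written once by the
new-order transaction, touched exactly once more by the delivery
transaction, and then never modified again.

```sql
-- Simplified stand-in for a TPC-C order table (illustrative only).
CREATE TABLE orders (
    o_id         bigint PRIMARY KEY,
    o_entry_d    timestamptz NOT NULL,
    o_carrier_id int              -- NULL until delivery
);

-- New-order transaction: the row is inserted once...
INSERT INTO orders (o_id, o_entry_d, o_carrier_id)
VALUES (1, now(), NULL);

-- Delivery transaction: ...and later updated exactly once,
-- then never modified again for the rest of the benchmark run.
UPDATE orders SET o_carrier_id = 7 WHERE o_id = 1;
```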
What we actually see is that the FSM can never quite resist the
temptation to dirty older pages that should be aging out. And so
performance degrades over days and weeks -- that's how long Jan has
said that it can take.
The FSM does have a bias in favor of using earlier pages over later
pages, in order to make relation truncation by VACUUM more likely in
general. That bias certainly isn't helping us here, and might be
another thing that hurts us. I think that the fundamental problem is
that the FSM just doesn't have any way of recognizing that its
behavior is penny wise, pound foolish. I don't believe that there is
any fundamental reason why Postgres can't have *stable* long term
performance for this workload, in the way that it's really expected
to. That seems possible within the confines of the existing design
for HOT and VACUUM, which already work very well for the first few hours.
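As a rough way to watch that bias from SQL (an observation sketch
against the simplified orders table above, not something from this
thread): the contrib extension pg_freespacemap shows which heap blocks
the FSM advertises as having free space, and a row's ctid shows which
block its latest version actually landed in.

```sql
-- Requires the contrib extension; 'orders' is the illustrative table above.
CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

-- Which heap blocks does the FSM advertise as having free space?
-- Lower block numbers tend to be preferred when space is requested.
SELECT blkno, avail
FROM pg_freespace('orders')
WHERE avail > 0
ORDER BY blkno
LIMIT 20;

-- Where did a recently updated row's new version actually land?
-- ctid is (block number, offset); a new version migrating to an old,
-- low-numbered block is the "dirtying older pages" described above.
SELECT o_id, ctid FROM orders WHERE o_id = 1;
```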
--
Peter Geoghegan