From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Frédéric Yhuel <frederic(dot)yhuel(at)dalibo(dot)com>
Cc: Nathan Bossart <nathandbossart(at)gmail(dot)com>, wenhui qiu <qiuwenhuifx(at)gmail(dot)com>, "Imseih (AWS), Sami" <simseih(at)amazon(dot)com>, David Rowley <dgrowleyml(at)gmail(dot)com>, Joe Conway <mail(at)joeconway(dot)com>, Michael Banck <mbanck(at)gmx(dot)net>, Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>, Melanie Plageman <melanieplageman(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: New GUC autovacuum_max_threshold ?
Date: 2025-01-14 19:30:13
Message-ID: CA+Tgmoa_Ra_a6d3m2Z=_urwjeKzu9vzewLMuFu+-Cxd3dK15vA@mail.gmail.com
Lists: pgsql-hackers
On Wed, Nov 13, 2024 at 5:03 AM Frédéric Yhuel
<frederic(dot)yhuel(at)dalibo(dot)com> wrote:
> Let's compare the current situation to the situation post-Nathan's-patch
> with a cap of 100M. Consider a table 100 times larger than the one in
> Robert's previous example, so pgbench scale factor 2_560_000, size on
> disk 32TB.
This is a great thought experiment.
> Currently, that table will be vacuumed for bloat when the number of
> dead tuples exceeds 20% of the table size, because that's the default
> value of autovacuum_vacuum_scale_factor. The table has 256 billion
> tuples, so that means that we're going to vacuum it when there are
> more than 51 billion dead tuples.
Are we, though? In previous releases, maintenance_work_mem was capped
at 1GB, and it took 6 bytes per dead TID, so we were limited to 1/6 of
a billion dead tuples per indexvac cycle. So, if we really were
vacuuming only every 51 billion dead tuples, we would be doing about
300 indexvac cycles per vacuum. I think somebody in this situation
would likely have needed to adjust the settings or things would just
stop working, long before they got to this point. It's not impossible
that somebody out there has a low-criticality, largely-unmonitored
system that is like this, but I've never seen anything like it.
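
To make that arithmetic concrete, here's a rough Python sketch; the
constants are just the back-of-the-envelope assumptions above (a 1GB
ceiling for the dead-TID array, 6 bytes per TID), not values read out
of an actual server:

    # Rough sketch of the numbers above; all constants are assumptions.
    TABLE_TUPLES   = 256_000_000_000         # ~32TB pgbench table, 256 billion rows
    SCALE_FACTOR   = 0.2                     # default autovacuum_vacuum_scale_factor
    TID_BYTES      = 6                       # bytes per dead TID in the old array
    WORK_MEM_BYTES = 1 * 1024**3             # old 1GB ceiling for dead TIDs

    bloat_threshold  = TABLE_TUPLES * SCALE_FACTOR       # ~51 billion dead tuples
    tids_per_pass    = WORK_MEM_BYTES // TID_BYTES       # ~179 million TIDs per pass
    index_vac_passes = bloat_threshold / tids_per_pass   # ~286, i.e. "about 300"

    print(f"trigger at ~{bloat_threshold/1e9:.0f} billion dead tuples")
    print(f"~{tids_per_pass/1e6:.0f} million dead TIDs fit per index vacuum pass")
    print(f"~{index_vac_passes:.0f} index vacuum cycles per table vacuum")
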
> Post-patch, we will vacuum when we
> have 100 million dead tuples. Suppose a uniform workload that slowly
> updates rows in the table. If we were previously autovacuuming the
> table once per day (1440 minutes) we're now going to try to vacuum it
> roughly every three minutes (1440 minutes / 512 = 168 seconds).
>
> (compare with every 55 min with my formula)
If we were vacuuming the table once per day, and it had 51 billion dead
tuples each time, that would mean we were generating ~591000 dead
tuples per second during that day. I'm not sure that's physically
possible with PostgreSQL on any hardware. I am almost positive that
you couldn't get by with vacuuming once a day if you were. I actually
think there's no amount of vacuuming that can turn this into a
success, at least on old releases with the 1GB limit, and maybe even
now. In 168 seconds you would have generated almost 100 million dead
tuples, which is already closing in on the 1GB autovacuum_work_mem
limit, so you probably need to vacuum at least that often to avoid
having to do multiple indexvac passes, but you also probably can't
finish vacuuming the table in 168 seconds, so you're probably just
going to get runaway bloat no matter what you do. The new dead TID
store should help, but I suspect if you are generating dead tuples
this fast on a table this large you are in a lot of trouble even on
the latest release.
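
Spelling out that rate arithmetic the same way (again just a sketch,
with the same assumed round numbers):

    # Dead-tuple rate implied by one vacuum per day at the 20% threshold.
    DEAD_PER_DAY    = 51_000_000_000
    SECONDS_PER_DAY = 24 * 60 * 60

    rate          = DEAD_PER_DAY / SECONDS_PER_DAY   # ~590,000 dead tuples/second
    dead_in_168s  = rate * 168                       # ~99 million, "almost 100 million"
    bytes_in_168s = dead_in_168s * 6                 # ~570MB of TIDs vs. the old 1GB cap

    print(f"~{rate:,.0f} dead tuples per second")
    print(f"~{dead_in_168s/1e6:.0f} million dead tuples accumulate in 168 seconds")
    print(f"~{bytes_in_168s/2**20:.0f}MB of dead TIDs, against a 1GB limit")
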
> Of course, this is a theoretical example that is probably unrealistic. I
> don't know, really. I don't know if Robert's example was realistic in
> the first place.
I intended it to be realistic, but I might not have entirely
succeeded. Even if it's a bit off, I think it's far closer to being
realistic as I presented it than in your hundred-times-larger
scenario, which makes me think that the higher cap is more sensible
than the one you originally proposed. I don't think that the argument
I made can be scaled up or down by an arbitrary multiple without
becoming silly. The practical limits here have to do with the
capabilities of the hardware it is possible to buy, and they'll need
to be adjusted if, say, disks get ten times bigger and a hundred times
faster and memory becomes as cheap as water.
--
Robert Haas
EDB: http://www.enterprisedb.com