From: Hannu Krosing <hannuk(at)google(dot)com>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: Magnus Hagander <magnus(at)hagander(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: We should stop telling users to "vacuum that database in single-user mode"
Date: 2021-03-03 12:10:31
Message-ID: CAMT0RQQoNKVkrJT0XsgU3Y=SQNnH=vSdGr1q3tBJ1LVB=hLORg@mail.gmail.com
Lists: pgsql-hackers
On Wed, Mar 3, 2021 at 11:33 AM David Rowley <dgrowleyml(at)gmail(dot)com> wrote:
>
> On Wed, 3 Mar 2021 at 21:44, Magnus Hagander <magnus(at)hagander(dot)net> wrote:
...
> > I think we misunderstand each other. I meant this only as a comment
> > about the idea of ignoring the cost limit in single user mode -- that
> > is, it's a reason to *want* vacuum to not run as quickly as possible
> > in single user mode. I should've trimmed the email better.
>
> I meant to ignore the cost limits if we're within a hundred million or
> so of the stopLimit. Per what Hannu mentioned, there does not seem to
> be a great need with current versions of PostgreSQL to restart the
> instance in single-user mode. VACUUM still works once we're beyond the
> stopLimit. It's just commands that need to generate a new XID that'll
> fail with the error message mentioned by Hannu.
I am investigating the possibility of introducing a special "Restricted
Maintenance Mode" to let the admin mitigate after xidStopLimit is reached,
maybe for another 0.5M txids, by doing things like:
* dropping an index - to make vacuum faster
* dropping a table - sometimes it is better to drop a table in order to get
  the production database functional again instead of waiting hours for the
  vacuum to finish, and then later restore it from backup or maybe access it
  from a read-only clone of the database via FDW
* dropping a stale replication slot which is holding back vacuum
To make sure that this will not accidentally just push xidStopLimit out by
0.5M for users who run their main workloads as a superuser (they do exist!),
this mode should be restricted to the following (a rough sketch of such a
gate is below):
* only superuser
* only a subset of commands / functions
* heavily throttled to avoid running out of TXIDs, maybe 1-10 xids per second
* maybe also require setting a GUC, to be very explicit
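To make the intent a bit more concrete, here is a purely illustrative,
self-contained sketch (not a patch; all names are made up) of the kind of
gate and throttle such a mode could apply before handing out an XID once
we are past xidStopLimit:

/*
 * Illustrative sketch only, not backend code: restricted_maintenance_mode
 * and maintenance_xid_allowed() are hypothetical.  The real check would
 * have to live near XID assignment and use the server's own GUC and
 * rate-limiting infrastructure.
 */
#include <stdbool.h>
#include <time.h>
#include <unistd.h>

/* hypothetical GUC, off by default */
static bool restricted_maintenance_mode = false;

static bool
maintenance_xid_allowed(bool is_superuser, bool is_whitelisted_command)
{
    static time_t last_xid_time = 0;
    time_t      now;

    /* without the GUC, a superuser and an allowed command: same hard stop as today */
    if (!restricted_maintenance_mode || !is_superuser || !is_whitelisted_command)
        return false;

    /* crude throttle: at most ~1 xid per second */
    now = time(NULL);
    if (now == last_xid_time)
        sleep(1);
    last_xid_time = time(NULL);

    return true;
}

The point is just that the mode would only relax the stop for a narrow,
explicitly enabled, heavily throttled set of actions.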
> > I agree with your other idea, that of kicking in a more aggressive
> > autovacuum if it's not dealing with things fast enough. Maybe even in
> > an incremental way - that is, run with the default, then at another
> > threshold drop them to half, and at yet another threshold drop them to
> > 0. I agree that pretty much anything is better than forcing the user
> > into single user mode.
>
> OK cool. I wondered if it should be reduced incrementally or just
> switch off the cost limit completely once we're beyond
> ShmemVariableCache->xidStopLimit.
An abrupt change is more likely to make the user/DBA notice that something
is going on. I have even been thinking about deliberate throttling to make
the user notice / pay attention.
> If we did want it to be incremental
> then if we had say ShmemVariableCache->xidFastVacLimit, which was
> about 100 million xids before xidStopLimit, then the code could adjust
> the sleep delay down by the percentage of the way we are from
> xidFastVacLimit to xidStopLimit.
>
> However, if we want to keep adjusting the sleep delay then we need to
> make that work for vacuums that are running already. We don't want to
> call ReadNextTransactionId() too often, but maybe if we did it once
> per 10 seconds worth of vacuum_delay_point()s. That way we'd never do
> it for vacuums already going at full speed.
There are already examples of this in the code; for example, the decision to
force-start a disabled autovacuum is reconsidered every 64k transactions.
There is a related item in https://commitfest.postgresql.org/32/2983/ .
When that gets done, we could drive the adjustments from autovacuum.c by
folding a remaining-XID-range factor into the existing worker delay
balancing in autovac_balance_cost(), and signalling the autovacuum workers
to rerun the adjustment every few seconds once we are in the danger zone
(a rough sketch of the scaling is below).
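To illustrate what I mean by the scaling (again only a sketch, not a patch:
xidFastVacLimit does not exist, and this ignores XID wraparound, which real
code would handle with the TransactionId comparison macros):

#include <stdint.h>

typedef uint32_t TransactionId;     /* stand-in for the backend typedef */

/*
 * Scale the cost delay down linearly from its configured value at a
 * hypothetical xidFastVacLimit to zero at xidStopLimit, so vacuum
 * speeds up gradually instead of abruptly.
 */
static double
adjusted_cost_delay(double base_delay_ms,
                    TransactionId next_xid,
                    TransactionId fast_vac_limit,   /* hypothetical */
                    TransactionId stop_limit)
{
    double      frac;

    if (next_xid <= fast_vac_limit)
        return base_delay_ms;       /* not in the danger zone yet */
    if (next_xid >= stop_limit)
        return 0.0;                 /* full speed, no sleeping */

    /* fraction of the way from fast_vac_limit to stop_limit */
    frac = (double) (next_xid - fast_vac_limit) /
           (double) (stop_limit - fast_vac_limit);

    return base_delay_ms * (1.0 - frac);
}

autovac_balance_cost() already recalculates per-worker cost settings, so a
factor like this could simply be applied there each time it runs.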
Cheers
Hannu