From: Masahiko Sawada <masahiko(dot)sawada(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Peter Geoghegan <pg(at)bowt(dot)ie>
Subject: Re: xid wraparound danger due to INDEX_CLEANUP false
Date: 2020-04-17 06:18:51
Message-ID: CA+fd4k7exJ3XS7c4NiPXwi-fggg5Y9LfZe+tRxFfOjp13Y8Dvw@mail.gmail.com
Lists: pgsql-hackers
On Fri, 17 Apr 2020 at 02:58, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> Hi,
>
> On 2020-04-16 16:30:02 +0900, Masahiko Sawada wrote:
> > For btree indexes, IIRC skipping index cleanup could not be a cause of
> > corruption, but be a cause of index bloat since it leaves recyclable
> > pages which are not marked as recyclable. The index bloat is the main
> > side effect of skipping index cleanup. When user executes VACUUM with
> > INDEX_CLEANUP to reclaim index garbage, such pages will also be
> > recycled sooner or later? Or skipping index cleanup can be a cause of
> > recyclable page never being recycled?
>
> Well, it depends on what you define as "never". Once the xids on the
> pages have wrapped around, the page level xids will appear to be from
> the future for a long time. And the metapage xid appearing to be from
> the future will prevent some vacuums from actually doing the scan too,
> even if INDEX_CLEANUP is reenabled. So a VACUUM, even with
> INDEX_CLEANUP on, will not be able to recycle those pages anymore. At
> some point the wrapped around xids will be "current" again, if there's
> enough new xids.
>
>
> It's not ok for vacuumlazy.c to make decisions like this. I think the
> INDEX_CLEANUP logic clearly needs to be pushed down into the
> amvacuumcleanup callbacks, and it needs to be left to the index AMs to
> decide what the correct behaviour is.
I asked the question above because I wanted to clarify the impact of this bug. I agree with you that the INDEX_CLEANUP logic should be pushed down into the amvacuumcleanup callbacks.
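For reference, the recyclability test in question is roughly the following (a lightly simplified sketch of _bt_page_recyclable() with my own comments, not the exact source):

#include "postgres.h"

#include "access/nbtree.h"
#include "access/transam.h"
#include "utils/snapmgr.h"

/*
 * Roughly what _bt_page_recyclable() does: a deleted page becomes
 * recyclable only once the XID stamped into it at deletion time is
 * behind RecentGlobalXmin.  TransactionIdPrecedes() compares XIDs with
 * modulo-2^31 arithmetic, so an XID that sits unrecycled for about two
 * billion transactions wraps around and starts to look like it is in
 * the future, and this test keeps failing for a long time, which is
 * the scenario described above.
 */
static bool
sketch_page_recyclable(Page page)
{
    BTPageOpaque opaque;

    /* an all-zeroes page can simply be reclaimed */
    if (PageIsNew(page))
        return true;

    opaque = (BTPageOpaque) PageGetSpecialPointer(page);
    return P_ISDELETED(opaque) &&
        TransactionIdPrecedes(opaque->btpo.xact, RecentGlobalXmin);
}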
On Fri, 17 Apr 2020 at 07:49, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> Hi,
>
> On 2020-04-16 13:28:00 -0700, Peter Geoghegan wrote:
> > > And, what's worse, in the INDEX_CLEANUP off case, future VACUUMs with
> > > INDEX_CLEANUP on might not even visit the index. As there very well
> > > might not be many dead heap tuples around anymore (previous vacuums with
> > > cleanup off will have removed them), the
> > > vacuum_cleanup_index_scale_factor logic may prevent index vacuums.
I think this doesn't happen because, in the INDEX_CLEANUP off case,
vacuum marks the line pointers of dead tuples as dead but leaves them
in place. Therefore, future VACUUMs with INDEX_CLEANUP on will see these
dead line pointers and invoke ambulkdelete.
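To illustrate, here is a minimal sketch (not the real vacuumlazy.c code) of the second heap pass, with hypothetical names for the arguments:

#include "postgres.h"

#include "storage/bufpage.h"
#include "storage/itemid.h"

/*
 * The LP_DEAD items can only be set unused after ambulkdelete() has
 * removed the index entries that still point at them.  With
 * INDEX_CLEANUP off that never happens, so the dead line pointers stay
 * behind and are counted as dead tuples by the next vacuum, which then
 * does call ambulkdelete().
 */
static void
sketch_second_heap_pass(Page page, OffsetNumber *deadoffsets, int ndead,
                        bool did_index_vacuum)
{
    if (!did_index_vacuum)
        return;                 /* leave LP_DEAD stubs for a later VACUUM */

    for (int i = 0; i < ndead; i++)
    {
        ItemId      itemid = PageGetItemId(page, deadoffsets[i]);

        ItemIdSetUnused(itemid);        /* reclaim the line pointer */
    }
    PageRepairFragmentation(page);
}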
> > I guess that they should visit the metapage to see if they need to do
> > that much. That would allow us to fix the problem while mostly
> > honoring INDEX_CLEANUP off, I think.
>
> Yea. _bt_vacuum_needs_cleanup() needs to check if
> metad->btm_oldest_btpo_xact is older than the FreezeLimit computed by
> vacuum_set_xid_limits() and vacuum the index if so even if INDEX_CLEANUP
> false.
Agreed. So _bt_vacuum_needs_cleanup() would become something like the following?
if (metad->btm_version < BTREE_NOVAC_VERSION)
    result = true;
else if (TransactionIdIsValid(metad->btm_oldest_btpo_xact) &&
         TransactionIdPrecedes(metad->btm_oldest_btpo_xact,
                               FreezeLimit))
    result = true;
else if (index_cleanup_disabled)
    result = false;
else if (TransactionIdIsValid(metad->btm_oldest_btpo_xact) &&
         TransactionIdPrecedes(metad->btm_oldest_btpo_xact,
                               RecentGlobalXmin))
    result = true;
else
    result = ...;    /* determine based on vacuum_cleanup_index_scale_factor */
Or perhaps we can change _bt_vacuum_needs_cleanup() so that it does
index cleanup if metad->btm_oldest_btpo_xact is older than the
FreezeLimit *and* it's an aggressive vacuum.
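That variant could look something like this (a rough sketch; "is_aggressive" is a hypothetical flag that would have to be passed down to the AM somehow and does not exist in IndexVacuumInfo today):

#include "postgres.h"

#include "access/nbtree.h"
#include "access/transam.h"

/*
 * Hypothetical helper for the variant above: force index cleanup for
 * page recycling only when the metapage XID has fallen behind
 * FreezeLimit and the caller is an aggressive (anti-wraparound) vacuum.
 */
static bool
sketch_cleanup_for_recycling(BTMetaPageData *metad,
                             TransactionId FreezeLimit,
                             bool is_aggressive)
{
    return is_aggressive &&
        TransactionIdIsValid(metad->btm_oldest_btpo_xact) &&
        TransactionIdPrecedes(metad->btm_oldest_btpo_xact, FreezeLimit);
}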
Anyway, if we change IndexVacuumInfo so that it tells the AM that the
INDEX_CLEANUP option is disabled and also passes the FreezeLimit, a
problem is that it would break compatibility.
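For example (a sketch only; the two new fields are hypothetical, and having to add them to the struct is precisely the compatibility problem):

#include "postgres.h"

#include "access/genam.h"

/*
 * Sketch of the kind of change being discussed.  The first two fields
 * exist in today's IndexVacuumInfo (the rest are elided); the last two
 * are hypothetical additions.
 */
typedef struct IndexVacuumInfoSketch
{
    Relation    index;          /* the index being vacuumed */
    bool        analyze_only;   /* ANALYZE (without any actual vacuum) */
    /* ... remaining existing IndexVacuumInfo fields ... */

    bool        index_cleanup_disabled;     /* VACUUM ran with INDEX_CLEANUP off */
    TransactionId freeze_limit;             /* from vacuum_set_xid_limits() */
} IndexVacuumInfoSketch;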
Regards,
--
Masahiko Sawada http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services