Re: vacuum vs heap_update_tuple() and multixactids

From: Alvaro Herrera <alvherre@alvh.no-ip.org>
To: Andres Freund <andres@anarazel.de>
Cc: PostgreSQL Bugs <pgsql-bugs@postgresql.org>, pgsql-hackers@postgresql.org, Robert Haas <robertmhaas@gmail.com>
Subject: Re: vacuum vs heap_update_tuple() and multixactids
Date: 2017-12-19 18:35:12
Message-ID: 20171219183512.5clw3fxztholw4vq@alvherre.pgsql
Lists: pgsql-bugs pgsql-hackers

Andres Freund wrote:

> I think the bugfix is going to have to essentially be something similar
> to FreezeMultiXactId(). I.e., when reusing an old tuple's xmax for a new
> tuple version, we need to prune dead multixact members. I think we can
> do so unconditionally and rely on the multixact ID caching layer to avoid
> unnecessarily creating multis when all members are the same.
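
To make the shape of that fix concrete, here is a toy, self-contained
sketch of the member pruning being described (invented names throughout;
this is not PostgreSQL source, and the liveness test is only a stand-in
for the real clog/procarray checks):

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

typedef uint32_t TransactionId;

/* Invented stand-in for a multixact member. */
typedef struct
{
    TransactionId xid;
    bool          is_update;    /* updater vs. plain locker */
} Member;

/* Toy liveness rule: a member is dead once its xid precedes the oldest
 * still-running xid.  The real code would consult clog/procarray. */
static bool
member_is_live(Member m, TransactionId oldest_running)
{
    return m.xid >= oldest_running;
}

/* FreezeMultiXactId()-style pruning: when carrying an old xmax over to
 * a new tuple version, keep only the still-live members. */
static int
prune_members(const Member *in, int n, Member *out,
              TransactionId oldest_running)
{
    int nout = 0;

    for (int i = 0; i < n; i++)
        if (member_is_live(in[i], oldest_running))
            out[nout++] = in[i];
    return nout;
}

int
main(void)
{
    Member members[] = {{90, false}, {120, false}, {130, true}};
    Member live[3];
    int nlive = prune_members(members, 3, live, 100);

    for (int i = 0; i < nlive; i++)
        printf("live member: xid %u%s\n",
               (unsigned) live[i].xid,
               live[i].is_update ? " (updater)" : "");
    return 0;
}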

Actually, isn't the cache subject to the very same problem? If you use
a value from the cache, it could very well be below whatever cutoff
multi the other process chose ...
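
To illustrate the hazard with a toy model (again, invented names, not
the real cache code): a cached multi can predate a freeze cutoff chosen
after the multi was created, so any reuse would need a guard roughly
like this:

#include <stdio.h>
#include <stdint.h>

typedef uint32_t MultiXactId;

/* Toy model of the backend-local cache: remembers the multi created
 * earlier for some member set. */
static MultiXactId cached_multi = 100;   /* created before vacuum ran */

int
main(void)
{
    /* Meanwhile, vacuum in another process chose this freeze cutoff and
     * assumes no multi below it remains in the heap.  (A real comparison
     * would be wraparound-aware, like MultiXactIdPrecedes().) */
    MultiXactId cutoff_multi = 150;

    MultiXactId mxid = cached_multi;     /* cache hit: member set matched */

    if (mxid < cutoff_multi)
        printf("cached multi %u predates cutoff %u: must create a fresh one\n",
               (unsigned) mxid, (unsigned) cutoff_multi);
    return 0;
}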

--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
