From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | pgsql-hackers(at)postgreSQL(dot)org |
Subject: | Aggressive freezing versus caching of pg_proc entries |
Date: | 2010-02-01 00:02:14 |
Message-ID: | 18903.1264982534@sss.pgh.pa.us |
Lists: | pgsql-hackers |
There are various places in the backend that rely on comparing a catalog
tuple's TID and XMIN to values saved in a cache entry in order to detect
whether the tuple changed since the cache entry was made. (So far as
I can find, the only places that do this are looking at pg_proc entries
--- fmgr.c as well as each of the standard PLs use this technique for
checking validity of function-related cache entries.)
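For concreteness, here is a minimal sketch of that validity check, loosely patterned on the per-function caches in plpgsql and fmgr.c; the struct and function names are illustrative, not the actual backend code:

```c
#include "postgres.h"
#include "access/htup.h"        /* HeapTuple, HeapTupleHeaderGetXmin (htup_details.h in newer trees) */
#include "storage/itemptr.h"    /* ItemPointerData, ItemPointerEquals */

/* Hypothetical cache entry; the field names echo plpgsql's, but this is not the real code. */
typedef struct CachedFunction
{
    Oid             fn_oid;     /* pg_proc OID of the function */
    TransactionId   fn_xmin;    /* xmin of the pg_proc tuple when the entry was built */
    ItemPointerData fn_tid;     /* TID of the pg_proc tuple when the entry was built */
    /* ... compiled representation of the function ... */
} CachedFunction;

/*
 * The validity test: the cache entry is trusted as long as the current
 * pg_proc tuple still shows the same xmin and TID that were saved when
 * the entry was built.
 */
static bool
cached_function_is_valid(CachedFunction *func, HeapTuple proctup)
{
    return func->fn_xmin == HeapTupleHeaderGetXmin(proctup->t_data) &&
        ItemPointerEquals(&func->fn_tid, &proctup->t_self);
}
```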
It strikes me that aggressive freezing of pg_proc entries could break
this logic, where "aggressive" means "freezing a tuple in less than the
inter-reference time of somebody's cache entry". Consider a sequence
like this:
1. A pg_proc tuple is frozen, so its xmin = FrozenXID.
2. Somebody caches a function definition based on the tuple.
3. Someone else updates the tuple twice; the second update by chance
puts the updated tuple back at its original TID (quite likely with HOT).
4. Aggressive freeze of pg_proc sets the tuple's xmin back to FrozenXID.
5. The first somebody uses the function again. He'll see the same TID and
XMIN as before, and hence fail to realize the tuple has changed.
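Spelling out step 5 in terms of the sketch above (values are illustrative):

```c
/*
 * State at step 5, in terms of the hypothetical check sketched earlier:
 *
 *   func->fn_xmin                            == FrozenTransactionId   (saved at step 2)
 *   HeapTupleHeaderGetXmin(proctup->t_data)  == FrozenTransactionId   (re-frozen at step 4)
 *   func->fn_tid                             == proctup->t_self       (update landed back on the old TID)
 *
 * cached_function_is_valid() therefore returns true, and the stale
 * compiled function is reused.
 */
```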
Another path to failure is that the tuple could be dropped entirely,
then replaced by one that unluckily has the same OID and is placed at the same
TID. Again, freezing destroys all trace that this happened.
The reason this occurred to me is that I was thinking about the
consequences of applying cluster-like VACUUM FULL to pg_proc. That
creates a third failure path: a single update followed by clustering
that unluckily drops the tuple back at its old TID. But as shown above,
we're already at risk without that.
I'm inclined to think that we should get rid of this caching method
in favor of having fmgr.c and the PLs hook into sinval cache flush
callbacks. It's not high priority, but given that various people
have advocated aggressive freezing policies, it seems there's some
risk in that. I also wonder if it wouldn't be better to centralize
this logic instead of having five different implementations of it
(or more --- likely some third-party PLs have copied that logic...)
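For reference, a minimal sketch of the sinval-based alternative using the existing CacheRegisterSyscacheCallback facility; the staleness flag is hypothetical, and the exact callback signature has varied across releases (a (Datum, int, uint32 hashvalue) form is assumed here):

```c
#include "postgres.h"
#include "utils/inval.h"        /* CacheRegisterSyscacheCallback */
#include "utils/syscache.h"     /* PROCOID */

/* Hypothetical flag consulted by the function-cache lookup path. */
static bool proc_cache_stale = false;

/*
 * Fired when a pg_proc syscache entry is invalidated, including
 * invalidations broadcast from other backends via sinval.  The simplest
 * possible response is shown: mark the whole local cache stale; a real
 * implementation could use the hash value (or tuple TID, in older
 * releases) to invalidate only matching entries.
 */
static void
proc_syscache_callback(Datum arg, int cacheid, uint32 hashvalue)
{
    proc_cache_stale = true;
}

/* Register once per backend, e.g. when the function cache is first set up. */
static void
init_proc_cache_invalidation(void)
{
    CacheRegisterSyscacheCallback(PROCOID,
                                  proc_syscache_callback,
                                  (Datum) 0);
}
```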
Anyway, not proposing to fix this now, but maybe it should be on the
TODO list.
regards, tom lane