From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: Regression tests fail once XID counter exceeds 2 billion
Date: 2011-11-13 23:16:48
Message-ID: 28621.1321226208@sss.pgh.pa.us
Lists: pgsql-hackers
While investigating bug #6291 I was somewhat surprised to discover
$SUBJECT. The cause turns out to be this kluge in alter_table.sql:
    select virtualtransaction
      from pg_locks
     where transactionid = txid_current()::integer
which of course starts to fail with "integer out of range" as soon as
txid_current() gets past 2^31. Right now, since there is no cast
between xid and any integer type, and no comparison operator except the
dubious xideqint4 one, the only way we could fix this is something
like
    where transactionid::text = (txid_current() % (2^32))::text
which is surely pretty ugly. Is it worth doing something less ugly?
I'm not sure if there are any other use-cases for this type of
comparison, but if there are, seems like it would be sensible to invent
a function along the lines of
    txid_from_xid(xid) returns bigint
that plasters on the appropriate epoch value for an
assumed-to-be-current-or-recent xid, and returns something that squares
with the txid_snapshot functions. Then the test could be coded without
kluges as
    where txid_from_xid(transactionid) = txid_current()
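For illustration only, here is a sketch (in Python, not the eventual C implementation) of the epoch arithmetic such a function would presumably need: take the current epoch from the 64-bit txid counter, and step it back by one when the given 32-bit xid is numerically ahead of the current counter, i.e. it was assigned before the last wraparound. The function name and the epoch-adjustment rule are assumptions based on the description above, not an existing API.

    # Hypothetical model of txid_from_xid.  A 64-bit txid is
    # (epoch << 32) | xid; a bare 32-bit xid that is "in the future"
    # relative to the current counter must belong to the previous epoch.

    def txid_from_xid(xid, current_txid):
        """xid: 32-bit xid; current_txid: 64-bit value from txid_current()."""
        epoch = current_txid >> 32
        current_xid = current_txid & 0xFFFFFFFF
        if xid > current_xid:
            # assumed-to-be-recent xid from before the epoch bump
            epoch -= 1
        return (epoch << 32) | xid

    # A recent xid below the current counter maps into the current epoch:
    assert txid_from_xid(5, (3 << 32) | 10) == (3 << 32) | 5
    # An xid just before a wrap falls into the previous epoch:
    assert txid_from_xid(0xFFFFFFF0, (3 << 32) | 10) == (2 << 32) | 0xFFFFFFF0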
Thoughts?
regards, tom lane