From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Maxim Orlov <orlovmg(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Add 64-bit XIDs into PostgreSQL 15
Date: 2022-01-04 19:32:20
Message-ID: 20220104193220.GL15820@tamriel.snowman.net
Lists: pgsql-hackers
Greetings,
* Maxim Orlov (orlovmg(at)gmail(dot)com) wrote:
> For a long time, wraparound has been a real pain for highly loaded
> systems. One source of performance degradation is the need to vacuum
> before every wraparound. There have been several proposals to make XIDs
> 64-bit, such as [1], [2], [3] and [4], to name a few.
>
> The approach of [2] seems to have stalled on the CF since 2018, but it
> has been used successfully in our Postgres Pro fork ever since. We have
> hundreds of customers using 64-bit XIDs. Dozens of instances are under
> loads that would require a wraparound every 1-5 days with 32-bit XIDs.
> It really helps customers with a huge transaction load that, in the case
> of 32-bit XIDs, could experience wraparounds every day. So I'd like to
> propose this modified approach to the CF.
>
> PFA the updated working patch v6 for the PG15 development cycle.
> It is based on Alexander Korotkov's version 5 patch [5], with a few
> fixes and some refactoring, and has been rebased to PG15.
Just to confirm, as I only did a quick look: if a transaction in such a
high-rate system lasts for more than a day (which certainly isn't
completely out of the question; I've had week-long transactions
before..), and something tries to delete a tuple on a page which has
tuples on it that can't be frozen yet due to the long-running
transaction, it's just going to fail?
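If I've read the design correctly, the reason for this failure mode can be sketched as follows. This is a minimal illustration in Python, not the patch's actual code: I'm assuming the patch stores a 64-bit base XID per heap page with 32-bit per-tuple offsets, so all unfrozen XIDs on a page must fit in a 32-bit window, and the function names here are purely illustrative.

```python
# Hedged sketch (illustrative names, not the patch's code): assume each
# heap page carries a 64-bit base XID, and tuples store 32-bit offsets
# from it. A modification fails when the new XID can't be expressed as a
# 32-bit offset and the base can't be advanced past unfrozen tuples.

UINT32_MAX = 2**32 - 1

def fits_on_page(page_base_xid: int, candidate_xid: int) -> bool:
    """A new tuple XID fits only if its offset from the page base is
    representable in 32 bits."""
    return 0 <= candidate_xid - page_base_xid <= UINT32_MAX

def can_shift_base(oldest_unfrozen_xid: int, candidate_xid: int) -> bool:
    """The page base can only be advanced past tuples that are already
    frozen; a long-running transaction pins oldest_unfrozen_xid."""
    return candidate_xid - oldest_unfrozen_xid <= UINT32_MAX

# A week-long transaction on a system doing ~50k tx/s pins an XID that
# ends up roughly 30 billion XIDs (> 2^32) behind the current one:
pinned = 1_000
current = pinned + 7 * 86_400 * 50_000
assert not fits_on_page(pinned, current)
assert not can_shift_base(pinned, current)
```

Under those assumptions, the 32-bit window is only hit when a single transaction stays open across more than ~4 billion XID assignments, which is why it takes both a very high transaction rate and a very long-lived transaction.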
I'm not saying that I have any idea how to fix that case offhand, and we
don't really support such a thing today, since the server would simply
stop instead. Still, if I saw something in the release notes about PG
moving to 64-bit transaction IDs, I'd be pretty surprised to discover
that there's still a 32-bit limit to watch out for, or else the system
will just start failing transactions. Perhaps that's a worthwhile
tradeoff for generally avoiding having to vacuum and deal with
transaction wraparound, but I have to wonder if there might be a better
answer. I also wonder how we're going to document and monitor for this
potential issue, and what kind of corrective action will be needed (kill
transactions older than a certain number of transactions..?).
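One possible shape for that corrective action, sketched in Python under stated assumptions: suppose monitoring periodically samples per-backend transaction XID ages (e.g. from `age(backend_xid)` in `pg_stat_activity`) and flags sessions approaching some fraction of the 32-bit window before modifications start to fail. The function and threshold below are hypothetical, not an existing tool or part of the patch.

```python
# Hedged sketch of a hypothetical monitoring policy: flag backends whose
# open transaction is older (in XID terms) than threshold * 2^32, so an
# operator can terminate them before page updates begin to fail.

UINT32_MAX = 2**32 - 1

def backends_to_terminate(xid_ages: dict[int, int],
                          threshold: float = 0.5) -> list[int]:
    """Return pids (keys) whose transaction XID age exceeds the limit."""
    limit = int(threshold * UINT32_MAX)
    return [pid for pid, age in xid_ages.items() if age > limit]

# pid 42 has held its transaction open for ~3 billion XIDs, pid 43 is new:
assert backends_to_terminate({42: 3_000_000_000, 43: 10_000}) == [42]
```

Whether such a policy should be automatic (akin to how xidStopLimit forces a stop today) or left to operators is exactly the documentation/monitoring question above.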
Thanks,
Stephen