From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>
Cc: Nikhil Sontakke <nikhils(at)2ndquadrant(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Logical Decoding and HeapTupleSatisfiesVacuum assumptions
Date: 2018-01-29 10:38:22
Message-ID: CANP8+j+56fcvkWbnERJug+cCb4VCwHwri=hu6TNQk08T-rX8AQ@mail.gmail.com
Lists: pgsql-hackers
On 23 January 2018 at 19:17, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com> wrote:
> I am not sure if this helps streaming use-case though as
> there is not going to be any external transaction management involved there.
So, I think we need some specific discussion of what to do in that case.
Streaming happens only with big transactions and only for short periods.
The problem only occurs when we are decoding and we hit a catalog
table change. Processing of that change is short, and then we
continue, so it seems perfectly fine to block aborts in those
circumstances. We can just mark that state in an in-memory array of
StreamingDecodedTransactions that has size SizeOf(TransactionId) *
MaxNumWalSenders.
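As a rough sketch of what I have in mind (the names
StreamingDecodedTransactions, StreamingDecodedShmemInit and
SetStreamingDecodedXid are purely illustrative, locking is elided, and
max_wal_senders, ShmemInitStruct() and InvalidTransactionId are the
existing backend facilities this would build on):

#include "postgres.h"
#include "access/transam.h"
#include "replication/walsender.h"
#include "storage/shmem.h"

/* One slot per walsender: the XID it is currently stream-decoding. */
static TransactionId *StreamingDecodedTransactions;

void
StreamingDecodedShmemInit(void)
{
	bool		found;

	StreamingDecodedTransactions = (TransactionId *)
		ShmemInitStruct("StreamingDecodedTransactions",
						sizeof(TransactionId) * max_wal_senders,
						&found);
	if (!found)
	{
		int			i;

		for (i = 0; i < max_wal_senders; i++)
			StreamingDecodedTransactions[i] = InvalidTransactionId;
	}
}

/*
 * Walsender 'slot' marks 'xid' before decoding a catalog change
 * within it, and clears the slot back to InvalidTransactionId when
 * done.  Single writer per slot; real code would need a barrier or
 * spinlock here.
 */
void
SetStreamingDecodedXid(int slot, TransactionId xid)
{
	StreamingDecodedTransactions[slot] = xid;
}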
We can add a check to RecordTransactionAbort(), just before the
critical section, to see whether the aborting transaction is currently
being processed as a StreamingDecodedTransaction and, if so, poll
until we're OK to abort. The check will be quick, and the abort path
is not one we need to optimize.
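For instance, the abort-side check could look like this (again just a
sketch using the hypothetical helpers above, assumed to live alongside
the array; TransactionIdEquals() and pg_usleep() are existing
primitives, and the 1ms poll interval is an arbitrary choice):

/* Returns true while any walsender is stream-decoding 'xid'. */
static bool
XidIsBeingStreamDecoded(TransactionId xid)
{
	int			i;

	for (i = 0; i < max_wal_senders; i++)
	{
		if (TransactionIdEquals(StreamingDecodedTransactions[i], xid))
			return true;
	}
	return false;
}

/* In RecordTransactionAbort(), just before START_CRIT_SECTION(): */
	while (XidIsBeingStreamDecoded(xid))
		pg_usleep(1000L);		/* 1ms; abort is not a hot path */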
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services