From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: "tsunakawa(dot)takay(at)fujitsu(dot)com" <tsunakawa(dot)takay(at)fujitsu(dot)com>
Cc: Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Masahiro Ikeda <ikedamsh(at)oss(dot)nttdata(dot)com>, Zhihong Yu <zyu(at)yugabyte(dot)com>, Ibrar Ahmed <ibrar(dot)ahmad(at)gmail(dot)com>, Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com>, Masahiko Sawada <masahiko(dot)sawada(at)2ndquadrant(dot)com>, Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>, "ashutosh(dot)bapat(dot)oss(at)gmail(dot)com" <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com>, "amit(dot)kapila16(at)gmail(dot)com" <amit(dot)kapila16(at)gmail(dot)com>, "m(dot)usama(at)gmail(dot)com" <m(dot)usama(at)gmail(dot)com>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>, "sulamul(at)gmail(dot)com" <sulamul(at)gmail(dot)com>, "alvherre(at)2ndquadrant(dot)com" <alvherre(at)2ndquadrant(dot)com>, "thomas(dot)munro(at)gmail(dot)com" <thomas(dot)munro(at)gmail(dot)com>, "ildar(at)adjust(dot)com" <ildar(at)adjust(dot)com>, "horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp" <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>, "chris(dot)travers(at)adjust(dot)com" <chris(dot)travers(at)adjust(dot)com>, "ishii(at)sraoss(dot)co(dot)jp" <ishii(at)sraoss(dot)co(dot)jp>
Subject: Re: Transactions involving multiple postgres foreign servers, take 2
Date: 2021-06-16 16:07:36
Message-ID: CA+TgmoapYrULdCtRByft5mz+MnhYz=MoM5iHNFWo=cJ-KOZPrg@mail.gmail.com
Lists: pgsql-hackers
On Tue, Jun 15, 2021 at 5:51 AM tsunakawa(dot)takay(at)fujitsu(dot)com
<tsunakawa(dot)takay(at)fujitsu(dot)com> wrote:
> Postgres can do that, but other implementations cannot necessarily do it, I'm afraid. But before that, the FDW interface documentation doesn't describe anything about how to handle interrupts. Actually, odbc_fdw and possibly other FDWs don't respond to interrupts.
Well, I'd consider that a bug.
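For what it's worth, keeping an FDW responsive to interrupts mostly comes
down to not blocking inside the remote library: wait on the connection's
socket together with the process latch and call CHECK_FOR_INTERRUPTS() when
the latch is set. A rough sketch of that pattern for a libpq-based FDW (the
helper name is made up; this isn't copied from postgres_fdw or any other
FDW):

/*
 * Rough sketch of an interruptible wait for a libpq result.  The function
 * name is made up; this is illustration, not any FDW's actual code.
 */
#include "postgres.h"

#include "libpq-fe.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "storage/latch.h"

static PGresult *
interruptible_get_result(PGconn *conn)
{
    /* Don't block in PQgetResult(); poll the socket and our latch instead. */
    while (PQisBusy(conn))
    {
        int         wc;

        wc = WaitLatchOrSocket(MyLatch,
                               WL_LATCH_SET | WL_SOCKET_READABLE |
                               WL_EXIT_ON_PM_DEATH,
                               PQsocket(conn),
                               -1L,     /* no timeout */
                               PG_WAIT_EXTENSION);

        if (wc & WL_LATCH_SET)
        {
            ResetLatch(MyLatch);
            CHECK_FOR_INTERRUPTS();     /* honor cancel/terminate requests */
        }

        if (wc & WL_SOCKET_READABLE)
        {
            /* Pull in whatever arrived; on error, let PQgetResult report it. */
            if (!PQconsumeInput(conn))
                break;
        }
    }

    return PQgetResult(conn);
}

Any FDW whose remote library exposes a socket or an asynchronous API can do
something equivalent.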
> What we're talking about here is mainly whether commit should return success or failure when some participants failed to commit in the second phase of 2PC. That's new to Postgres, isn't it? Anyway, we should respect existing relevant specifications and (well-known) implementations before we conclude that we have to devise our own behavior.
Sure ... but we can only decide to do things that the implementation
can support, and running code that might fail after we've committed
locally isn't one of them.
> We talked about that, and unfortunately, I haven't seen a good and feasible idea to enhance the current approach that involves the resolver from the beginning of 2PC processing. Honestly, I don't understand why such a "one prepare, one commit in turn" serialization approach can be allowed in PostgreSQL, where developers pursue the best performance and even try to refrain from adding an if statement in a hot path. As I showed and Ikeda-san said, other implementations have each client session send prepare and commit requests. That's a natural way to achieve reasonable concurrency and performance.
I think your comparison here is quite unfair. We work hard to avoid adding
overhead in hot paths where it might cost something, but the FDW case involves a
network round-trip anyway, so the cost of an if-statement would surely
be insignificant. I feel like you want to assume without any evidence
that a local resolver can never be quick enough, even though the cost
of IPC between local processes shouldn't be that high compared to a
network round trip. But you also want to suppose that we can run code
that might fail late in the commit process even though there is lots
of evidence that this will cause problems, starting with the code
comments that clearly say so.
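Just to put some shape on what "IPC between local processes" means here:
per transaction, the backend would essentially be doing a shared-memory
handoff plus a latch wait, on the order of the sketch below. The FdwXact*
names are hypothetical, not the patch's actual API; the point is only that
none of this involves a network round trip.

/*
 * Purely illustrative sketch of handing commit-phase work to a resolver
 * process.  The FdwXact* names are hypothetical, not taken from the patch.
 */
#include "postgres.h"

#include "miscadmin.h"
#include "pgstat.h"
#include "storage/latch.h"

/* Hypothetical resolver interface, for illustration only. */
extern void FdwXactRequestResolution(TransactionId xid);
extern bool FdwXactAllResolved(TransactionId xid);

static void
wait_for_foreign_xact_resolution(TransactionId xid)
{
    /* Ask the resolver (via shared memory) to commit the prepared parts. */
    FdwXactRequestResolution(xid);

    while (!FdwXactAllResolved(xid))
    {
        int         rc;

        /* Sleep until the resolver pokes our latch, retrying once a second. */
        rc = WaitLatch(MyLatch,
                       WL_LATCH_SET | WL_TIMEOUT | WL_EXIT_ON_PM_DEATH,
                       1000L,
                       PG_WAIT_EXTENSION);

        if (rc & WL_LATCH_SET)
        {
            ResetLatch(MyLatch);
            CHECK_FOR_INTERRUPTS();
        }
    }
}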
--
Robert Haas
EDB: http://www.enterprisedb.com