From: Etsuro Fujita <etsuro(dot)fujita(at)gmail(dot)com>
To: Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: postgres_fdw: commit remote (sub)transactions in parallel during pre-commit
Date: 2021-11-07 09:06:04
Message-ID: CAPmGK17XczZmzU=0mhk2_oG9Omo1p-uZCNdZAh_3vuC6+YboiA@mail.gmail.com
Lists: pgsql-hackers
On Mon, Nov 1, 2021 at 3:22 PM Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com> wrote:
> On 2021/10/31 18:05, Etsuro Fujita wrote:
> > The patch is pretty simple: if a server option added
> > by the patch “parallel_commit” is enabled,
>
> Could you tell me why the parameter is necessary?
> Can't we always enable the feature?
I don’t think parallel commit would cause performance degradation,
even when there is just a single remote (sub)transaction to commit
when called from pgfdw_xact_callback (pgfdw_subxact_callback), so it
might be OK to enable it by default.  But my concern is the remote
side: if, during those functions, there are many (sub)transactions on
a single remote server that need to be committed, parallel commit
would increase the remote server’s load at (sub)transaction end
compared with serial commit (the existing implementation), because
the requests to commit those (sub)transactions are sent to the remote
server at the same time.  Some users might want to avoid that.
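To illustrate the difference, here is a minimal sketch of the
parallel-commit idea in terms of libpq's asynchronous API.  The
function name commit_all_parallel and the conns/nconns parameters are
hypothetical (the actual patch works over postgres_fdw's connection
hash); serial commit would instead send COMMIT and wait for its
result one connection at a time.

    #include <stdio.h>
    #include "libpq-fe.h"

    static void
    commit_all_parallel(PGconn **conns, int nconns)
    {
        int     i;

        /* Phase 1: send COMMIT to every remote server without waiting. */
        for (i = 0; i < nconns; i++)
        {
            if (!PQsendQuery(conns[i], "COMMIT TRANSACTION"))
                fprintf(stderr, "could not send COMMIT: %s",
                        PQerrorMessage(conns[i]));
        }

        /*
         * Phase 2: collect the results.  The remote servers commit
         * concurrently while we iterate, which is where the speedup
         * over serial commit comes from -- and also why all the
         * commit work lands on a remote server at the same time.
         */
        for (i = 0; i < nconns; i++)
        {
            PGresult   *res;

            while ((res = PQgetResult(conns[i])) != NULL)
            {
                if (PQresultStatus(res) != PGRES_COMMAND_OK)
                    fprintf(stderr, "COMMIT failed: %s",
                            PQerrorMessage(conns[i]));
                PQclear(res);
            }
        }
    }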
> > I think we could extend this to abort cleanup of remote
> > (sub)transactions during post-abort. Anyway, I think this is useful,
> > so I’ll add this to the upcoming commitfest.
>
> Thanks!
I'll update the patch as such in the next version.
> + /* Consume whatever data is available from the socket */
> + if (!PQconsumeInput(conn))
> + pgfdw_report_error(ERROR, NULL, conn, false, sql);
>
> Without the patch, PQconsumeInput() is not called before pgfdw_get_result().
> But could you tell me why you added PQconsumeInput() there?
The reason is that the result might already be available before
calling pgfdw_get_result(), in which case PQconsumeInput() followed
by PQisBusy() would allow us to call PQgetResult() without doing
WaitLatchOrSocket(), which I think is rather expensive.
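To make that concrete, the fast path would look roughly like this (a
simplified sketch of the result-waiting loop, not the patch itself;
the wait-event flags shown are my assumption about how the wait would
be set up):

    /*
     * If the result has already arrived, PQisBusy() returns false
     * right after PQconsumeInput() and we never enter the loop,
     * skipping WaitLatchOrSocket() entirely.
     */
    if (!PQconsumeInput(conn))
        pgfdw_report_error(ERROR, NULL, conn, false, sql);

    while (PQisBusy(conn))
    {
        /* Result not complete yet; now pay for the socket wait. */
        WaitLatchOrSocket(MyLatch,
                          WL_LATCH_SET | WL_SOCKET_READABLE |
                          WL_EXIT_ON_PM_DEATH,
                          PQsocket(conn), -1L, PG_WAIT_EXTENSION);
        ResetLatch(MyLatch);
        CHECK_FOR_INTERRUPTS();

        if (!PQconsumeInput(conn))
            pgfdw_report_error(ERROR, NULL, conn, false, sql);
    }

    res = PQgetResult(conn);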
> When ignore_errors argument is true, the error reported by
> PQconsumeInput() should be ignored?
I’m not sure about that, because the error might be caused by, e.g.,
OOM in the local server, in which case I don’t think it is safe to
ignore it and continue the (sub)transaction-end processing.
Thanks for reviewing!
Best regards,
Etsuro Fujita