From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>, Suraj Kharage <suraj(dot)kharage(at)enterprisedb(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: statement_timeout is not working as expected with postgres_fdw
Date: 2017-05-04 14:38:29
Message-ID: CA+TgmoaKPNnvatjD2U2QHcPwYGORUBUT-Omj3gagxkt1aUTjXQ@mail.gmail.com
Lists: pgsql-hackers
On Thu, May 4, 2017 at 7:13 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> In pgfdw_xact_callback, if the execution of ABORT TRANSACTION fails
> due to any reason then I think it will close the connection. The
> relevant code is:
> if (PQstatus(entry->conn) != CONNECTION_OK ||
>     PQtransactionStatus(entry->conn) != PQTRANS_IDLE)
>
> Basically, if ABORT TRANSACTION fails, then the transaction status won't
> be PQTRANS_IDLE.
I don't think that's necessarily true. Look at this code:
/* Rollback all remote subtransactions during abort */
snprintf(sql, sizeof(sql),
         "ROLLBACK TO SAVEPOINT s%d; RELEASE SAVEPOINT s%d",
         curlevel, curlevel);
res = PQexec(entry->conn, sql);
if (PQresultStatus(res) != PGRES_COMMAND_OK)
    pgfdw_report_error(WARNING, res, entry->conn, true, sql);
else
    PQclear(res);
If that SQL command somehow returns a result status other than
PGRES_COMMAND_OK, say due to a local OOM failure, the connection is
still PQTRANS_IDLE but the rollback wasn't actually done.
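This is why the proposed patch has the cleanup helper report success or
failure explicitly instead of letting the caller infer it from the
connection state afterward (see the hunk quoted further down). A
minimal sketch of such a helper, illustrative rather than the patch's
actual code:

#include "postgres.h"
#include "libpq-fe.h"

/*
 * Sketch: run a cleanup command and say explicitly whether it worked.
 * PQresultStatus(NULL) is PGRES_FATAL_ERROR and PQclear(NULL) is a
 * no-op, so a local OOM inside PQexec is reported as a failure too.
 * Note that even when this returns false, PQtransactionStatus() may
 * still report PQTRANS_IDLE, which is exactly the gap described above.
 */
static bool
pgfdw_exec_cleanup_query(PGconn *conn, const char *query)
{
    PGresult   *res = PQexec(conn, query);
    bool        ok = (PQresultStatus(res) == PGRES_COMMAND_OK);

    PQclear(res);
    return ok;
}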
> Also, does it matter if pgfdw_subxact_callback fails
> during execution of "ROLLBACK TO SAVEPOINT"? The user needs to execute
> a ROLLBACK TO SAVEPOINT command anyway before proceeding, and shouldn't
> that be good enough?
I don't understand this. If pgfdw_subxact_callback doesn't roll the
remote side back to the savepoint, how is it going to get done later?
There's no code in postgres_fdw other than that code to issue the
rollback.
> About patch:
>
> + /* Rollback all remote subtransactions during abort */
> + snprintf(sql, sizeof(sql),
> +          "ROLLBACK TO SAVEPOINT s%d; RELEASE SAVEPOINT s%d",
> +          curlevel, curlevel);
> + if (!pgfdw_exec_cleanup_query(entry->conn, sql))
> +     abort_cleanup_failure = true;
>
> If the above execution fails for any reason, then it won't even allow
> committing the main transaction, which I am not sure is the right
> thing to do.
Well, as I said in my reply to Tushar, I think it's the only correct
thing to do. Suppose you do the following:
(1) make a change to the foreign table
(2) enter a subtransaction
(3) make another change to the foreign table
(4) roll back the subtransaction
(5) commit the transaction
If the remote rollback gets skipped in step 4, it is dead wrong for
step 5 to commit the remote transaction anyway. Both the change made
in step (1) and the change made in step (3) would get made on the
remote side, which is 100% wrong. Given the failure of the rollback
in step 4, there is no way to achieve the desired outcome of
committing the step (1) change and rolling back the step (3) change.
The only way forward is to roll back everything - i.e. force a
toplevel abort.
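One way to picture that forced toplevel abort: if the savepoint-cleanup
failure is remembered on the connection (the abort_cleanup_failure flag
from the hunk quoted above), the commit path can simply refuse to send
COMMIT TRANSACTION. A minimal sketch with hypothetical names, shown as
an illustration of the idea rather than the patch's actual mechanism:

#include "postgres.h"
#include "libpq-fe.h"

/* Hypothetical per-connection state, mirroring the quoted hunk. */
typedef struct ConnCacheEntry
{
    PGconn     *conn;
    bool        abort_cleanup_failure;  /* savepoint cleanup failed? */
} ConnCacheEntry;

/*
 * Sketch: called from the pre-commit arm of the transaction callback.
 * If any earlier savepoint cleanup failed, error out instead of
 * sending COMMIT TRANSACTION, so the local transaction (and therefore
 * the remote one) aborts as a whole.
 */
static void
pgfdw_precommit_check(ConnCacheEntry *entry)
{
    if (entry->abort_cleanup_failure)
        ereport(ERROR,
                (errcode(ERRCODE_CONNECTION_EXCEPTION),
                 errmsg("foreign connection left inconsistent by failed savepoint rollback")));
}

Erroring out here turns step (5) into a toplevel abort, so the changes
from steps (1) and (3) are both rolled back, which is the only
consistent outcome still available.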
> #include "utils/memutils.h"
> -
> +#include "utils/syscache.h"
>
> Looks like spurious line removal
OK.
>> - For bonus points, give pgfdw_exec_query() an optional timeout
>> argument, and set it to 30 seconds or so when we're doing abort
>> cleanup. If the timeout expires, it errors out (which again
>> re-enters the abort path, dropping the connection and forcing outer
>> transaction levels to abort as well).
>
> Why do we need this 30 seconds timeout for abort cleanup failures?
> Isn't it sufficient if we propagate the error only on failures?
Well, the problem is that right now we're waiting for vast amounts of
time when the remote connection dies. First, the user notices that
the command is running for too long and hits ^C. At that point, the
connection is probably already dead, and the user's been waiting for a
while already. Then, we wait for PQcancel() to time out. Then, we
wait for PQexec() to time out. The result of that, as Suraj mentioned
in the original email, is that it takes about 16 minutes for the
session to recover when the connection dies, even with keepalive
settings and connection timeout set all the way to the minimum. The
goal here is to make it take a more reasonable time to recover.
Without modifying libpq, we can't do anything about the need to wait
for PQcancel() to time out, but we can improve the rest of it. If we
removed that 30-second timeout, we would win in the case where either
ABORT TRANSACTION or the ROLLBACK TO SAVEPOINT; RELEASE SAVEPOINT
sequence eventually succeeds but takes more than 30 seconds. In that
case, the change you are proposing
would allow us to reuse the connection instead of dropping it, and
would prevent a forced toplevel abort when subtransactions are in use.
However, the cost would be that when one of those commands never
succeeds, we would wait a lot longer before finishing a transaction
abort. I argue that if the remote server fails to respond to one of
those commands within 30 seconds, it's probably dead or nearly dead,
and we're probably better off just treating it as dead rather than
waiting longer. That's a judgement call, of course.
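For reference, such a timed wait can be built from libpq's asynchronous
API plus the backend's latch machinery. The helper below is a sketch
under those assumptions, with a hypothetical name, not the committed
code; the caller would issue the cleanup command with PQsendQuery() and
pass an endtime of now plus 30 seconds:

#include "postgres.h"
#include "libpq-fe.h"
#include "miscadmin.h"
#include "pgstat.h"
#include "storage/latch.h"
#include "utils/timestamp.h"

/*
 * Sketch: wait for the result of an already-sent query, giving up once
 * "endtime" passes.  Returns true on timeout; on a false return,
 * *result holds the last PGresult, or NULL if the connection broke.
 */
static bool
pgfdw_get_cleanup_result(PGconn *conn, TimestampTz endtime, PGresult **result)
{
    PGresult   *last_res = NULL;

    for (;;)
    {
        PGresult   *res;

        while (PQisBusy(conn))
        {
            int         wc;
            TimestampTz now = GetCurrentTimestamp();
            long        secs;
            int         microsecs;
            long        cur_timeout;

            if (now >= endtime)
            {
                /* Deadline passed: give up and let the caller drop the
                 * connection. */
                PQclear(last_res);
                *result = NULL;
                return true;
            }
            TimestampDifference(now, endtime, &secs, &microsecs);
            cur_timeout = secs * 1000 + microsecs / 1000;

            /* Sleep until data arrives, the latch fires, or we time out. */
            wc = WaitLatchOrSocket(MyLatch,
                                   WL_LATCH_SET | WL_SOCKET_READABLE | WL_TIMEOUT,
                                   PQsocket(conn), cur_timeout,
                                   PG_WAIT_EXTENSION);
            ResetLatch(MyLatch);
            CHECK_FOR_INTERRUPTS();

            if ((wc & WL_SOCKET_READABLE) && !PQconsumeInput(conn))
            {
                /* Connection is broken; report a non-timeout failure. */
                PQclear(last_res);
                *result = NULL;
                return false;
            }
        }

        res = PQgetResult(conn);
        if (res == NULL)
            break;              /* query is complete */
        PQclear(last_res);
        last_res = res;
    }

    *result = last_res;
    return false;
}

A timeout here would be treated as grounds for dropping the connection,
which re-enters the abort path and forces outer transaction levels to
abort as well, which is the behavior described above.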
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company