From: legrand legrand <legrand_legrand(at)hotmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: pg_stat_statements : how to catch non successfully finished statements ?
Date: 2020-05-01 09:26:13
Message-ID: 1588325173462-0.post@n3.nabble.com
Lists: pgsql-general
Tom Lane-2 wrote
> legrand legrand <legrand_legrand@> writes:
>> Tom Lane-2 wrote
>>> The hard part here is that you have to be really careful what you do in
>>> a PG_CATCH block, because the only thing you know for sure about the
>>> backend's state is that it's not good. Catalog fetches are right out,
>>> and anything that might itself throw an error had best be avoided as
>>> well. (Which, among other things, means that examining executor state
>>> would be a bad idea, and I'm not even sure you'd want to traverse the
>>> plan tree.)
>>> I'm not convinced that it's practical for pg_stat_statements to make
>>> a new shared hashtable entry under those constraints. But figuring out
>>> how to minimize the risks around that is the stumbling block, not lack
>>> of a hook.
>
>> As far as I have been testing this with *cancelled* queries (Cancel,
>> pg_cancel_backend(), statement_timeout, ...), I haven't found any
>> problem.
>> Would limiting the PG_CATCH block to those *cancelled* queries,
>> and *no other error*, be an alternate solution?
>
> I do not see that that would make one iota of difference to the risk that
> the executor state tree is inconsistent at the instant the error is
> thrown. You can't test your way to the conclusion that it's safe, either
> (much less that it'd remain safe); your test cases surely haven't hit
> every CHECK_FOR_INTERRUPTS call in the backend.
>
> regards, tom lane
New try:

Considering that the executor state tree is limited to QueryDesc->estate,
that would mean that the number of rows processed cannot be trusted, but
that queryid, buffers, and *duration* (the most important one) can still
be used?
Knowing that shared hashtable entries are now (in pg13) created at
planning time, there is no need to create a new entry for an execution
error: just update the counters (the current ones, or new columns such as
"errors", "total_error_time", ... added to the pg_stat_statements view).
Is that better?
Regards
PAscal
--
Sent from: https://www.postgresql-archive.org/PostgreSQL-general-f1843780.html