From: Rahila Syed <rahilasyed90(at)gmail(dot)com>
To: Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, torikoshia <torikoshia(at)oss(dot)nttdata(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Enhancing Memory Context Statistics Reporting
Date: 2025-01-24 13:47:35
Message-ID: CAH2L28u7=fcgnY8bpM87moiJxt++wqWZXh2HxFabYjiHSg76Cg@mail.gmail.com
Lists: pgsql-hackers
Hi,
>
> Just idea; as an another option, how about blocking new requests to
> the target process (e.g., causing them to fail with an error or
> returning NULL with a warning) if a previous request is still pending?
> Users can simply retry the request if it fails. IMO failing quickly
> seems preferable to getting stuck for a while in cases with concurrent
> requests.
>
Thank you for the suggestion. I agree that it is better to fail early and avoid waiting for a timeout in such cases. I will add a "pending request" tracker for this in shared memory. This approach will help prevent sending a concurrent request if a request for the same backend is still being processed.
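
A minimal sketch of what such a tracker could look like, assuming one per-backend slot in shared memory and PostgreSQL's atomics API; the names here (MemCtxReportingState, memctx_try_start_request, memctx_finish_request) are illustrative only, not taken from the actual patch:

#include "port/atomics.h"

/* One slot per backend, allocated in shared memory at startup. */
typedef struct MemCtxReportingState
{
    pg_atomic_uint32 pending;   /* 1 while a statistics request is in flight */
} MemCtxReportingState;

static MemCtxReportingState *memctx_state;

/*
 * Requesting backend: atomically claim the target's slot before signalling it.
 * Returns false if another request is still pending for that backend.
 */
static bool
memctx_try_start_request(int target_proc_idx)
{
    uint32      expected = 0;

    return pg_atomic_compare_exchange_u32(&memctx_state[target_proc_idx].pending,
                                          &expected, 1);
}

/* Target backend: clear the flag once its statistics have been published. */
static void
memctx_finish_request(int target_proc_idx)
{
    pg_atomic_write_u32(&memctx_state[target_proc_idx].pending, 0);
}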
IMO, one downside of throwing an error in such cases is that users might wonder whether they need to take corrective action, even though the issue will resolve itself and they just need to retry. Therefore, issuing a warning or displaying previously updated statistics might be a better alternative to throwing an error.
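
For illustration, the caller-side handling could then look roughly like the fragment below, built on the hypothetical flag sketched above; return_previous_statistics() is a placeholder for whatever fallback path exposes the previously published statistics:

    if (!memctx_try_start_request(target_proc_idx))
    {
        ereport(WARNING,
                (errmsg("a memory context statistics request is already pending for this process"),
                 errhint("Retry later, or use the previously reported statistics.")));

        /* Fall back to the statistics the target process last published. */
        return_previous_statistics();   /* hypothetical placeholder */
    }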
Thank you,
Rahila Syed