From: Rafael Thofehrn Castro <rafaelthca(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: torikoshia <torikoshia(at)oss(dot)nttdata(dot)com>, Andrei Lepikhov <lepihov(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Proposal: Progressive explain
Date: 2025-04-01 06:23:43
Message-ID: CAG0ozMrqMKcbcU1L9ON44himeEd=6qZ2_jH1fvF0yYD0zSgDYQ@mail.gmail.com
Lists: pgsql-hackers
Hello again,
> ERROR: could not attach to dynamic shared area
In addition to that refactoring issue, the current patch had a race
condition: pg_stat_progress_explain could try to access the DSA of a
process whose query was being aborted.
While discussing this with Robert, we agreed that it would be wiser to
take a step back and change the strategy used to share progressive
explain data in shared memory.
Instead of using per-backend DSAs shared via a hash structure, I now
define a dsa_pointer and an LWLock in each backend's PGPROC.
A global DSA is created by the first backend that attempts to use
the progressive explain feature. Once the DSA exists, subsequent
uses of the feature simply allocate memory there and reference it
via PGPROC's dsa_pointer.
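
A rough sketch of the layout (for illustration only; the field, lock,
and tranche names below are my shorthand here, not necessarily what the
attached patch uses):

    /* Assumed additions to each backend's PGPROC (storage/proc.h):
     *
     *    dsa_pointer progressive_explain;     explain data in the global DSA,
     *                                         InvalidDsaPointer when unused
     *    LWLock      progressiveExplainLock;  protects the field above
     *
     * plus a small struct in plain shared memory holding the DSA handle:
     */
    typedef struct ProgressiveExplainShared
    {
        dsa_handle  handle;     /* DSA_HANDLE_INVALID until first use */
    } ProgressiveExplainShared;

    static ProgressiveExplainShared *pe_shared; /* set up at shmem init */
    static dsa_area *pe_area = NULL;            /* this backend's attachment */

    /*
     * Attach to the global DSA, creating it if we are the first backend
     * to use progressive explain.  ProgressiveExplainLock is an assumed
     * named LWLock guarding creation of the area.
     */
    static void
    ProgressiveExplainAttach(void)
    {
        if (pe_area != NULL)
            return;

        LWLockAcquire(ProgressiveExplainLock, LW_EXCLUSIVE);
        if (pe_shared->handle == DSA_HANDLE_INVALID)
        {
            pe_area = dsa_create(LWTRANCHE_PROGRESSIVE_EXPLAIN);
            dsa_pin(pe_area);               /* keep area after creator exits */
            pe_shared->handle = dsa_get_handle(pe_area);
        }
        else
            pe_area = dsa_attach(pe_shared->handle);
        LWLockRelease(ProgressiveExplainLock);

        dsa_pin_mapping(pe_area);   /* keep the mapping for backend lifetime */
    }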
This solves the race condition reported by Torikoshi and improves
concurrency, since shared memory access is no longer controlled by a
single global LWLock but by a per-backend LWLock.
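
With that in place, pg_stat_progress_explain only needs to read the
target backend's pointer under that backend's lock. Roughly (again with
assumed names, continuing the sketch above):

    /* In pg_stat_progress_explain(), for each PGPROC being inspected: */
    LWLockAcquire(&proc->progressiveExplainLock, LW_SHARED);
    if (DsaPointerIsValid(proc->progressive_explain))
    {
        char   *plan = dsa_get_address(pe_area, proc->progressive_explain);

        values[col] = CStringGetTextDatum(plan);    /* copy out while locked */
    }
    LWLockRelease(&proc->progressiveExplainLock);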
I performed the same tests Torikoshi ran and it looks like we are good
now, even with more frequent inspections of pg_stat_progress_explain
(\watch 0.01).
Rafael.
Attachment: v15-0001-Proposal-for-progressive-explains.patch (application/octet-stream, 86.1 KB)