From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> |
Cc: | Yura Sokolov <y(dot)sokolov(at)postgrespro(dot)ru>, pgsql-hackers(at)lists(dot)postgresql(dot)org |
Subject: | Re: Add PortalDrop in exec_execute_message |
Date: | 2021-06-09 17:25:59 |
Message-ID: | 1422606.1623259559@sss.pgh.pa.us |
Lists: | pgsql-hackers |
I wrote:
> I'm still wondering though why Yura is observing resources remaining
> held by an executed-to-completion Portal. I think investigating that
> might be more useful than tinkering with pipeline mode.
I got a chance to look into this finally. The lens I've been looking
at this through is "why are we still holding any buffer pins when
ExecutorRun finishes?". Normal table scan nodes won't do that.
It turns out that the problem is specific to SELECT FOR UPDATE, and
it happens because nodeLockRows is not careful to shut down the
EvalPlanQual mechanism it uses before returning NULL at the end of
a scan. If EPQ has been fired, it'll be holding a tuple slot
referencing whatever tuple it was last asked about. The attached
trivial patch seems to take care of the issue nicely, while adding
little if any overhead. (A repeat call to EvalPlanQualEnd doesn't
do much.)
regards, tom lane
Attachment | Content-Type | Size |
---|---|---|
shut-down-EPQ-when-LockRows-stops.patch | text/x-diff | 800 bytes |