From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Should work_mem be stable for a prepared statement?
Date: 2025-02-27 18:42:27
Message-ID: 2a2b28e2ea677e3cec2850b2dd38b467bf1291fb.camel@j-davis.com
Lists: pgsql-hackers
As a part of this discussion:
James pointed out something interesting: a prepared statement enforces
the work_mem limit in effect at execution time, which may differ from
the work_mem in effect when the statement was prepared.
For instance:
SET work_mem='1GB';
PREPARE foo AS ...; -- plans using 1GB limit
SET work_mem='1MB';
EXECUTE foo; -- enforces 1MB limit
My first reaction is that this isn't right, because the plan's costing
is completely bogus under a different work_mem. It would make more
sense to me if we either (a) enforced work_mem as it was at planning
time; or (b) replanned when executed with a different work_mem
(similar to how we sometimes replan with different parameter values).
But I'm not sure whether someone might be relying on the existing
behavior.
If we were to implement (a) or (b), we couldn't use the work_mem
global directly; we'd need to save the value in the plan and enforce
the limit using the plan's saved value. But that might be a good
change anyway. In theory we'd need to do something similar for
hash_mem_multiplier, too.
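To make the two options concrete, here is a minimal sketch -- purely
illustrative pseudocode in Python, not PostgreSQL's actual plan-cache
code, with all names hypothetical -- of a plan that saves the work_mem
it was built under, and the two possible policies at execution time:

```python
# Illustrative sketch only, not PostgreSQL internals: a cached plan
# remembers the work_mem it was planned under, and execution either
# enforces that saved value (a) or replans on a mismatch (b).

class CachedPlan:
    def __init__(self, sql, work_mem_kb):
        self.sql = sql
        self.work_mem_kb = work_mem_kb  # setting saved at plan time

def plan(sql, work_mem_kb):
    # Stand-in for the planner; real costing depends on work_mem.
    return CachedPlan(sql, work_mem_kb)

def execute(cached, current_work_mem_kb, policy):
    if policy == "a":
        # Option (a): enforce the limit saved in the plan, so the
        # costing assumptions and the enforced limit always agree.
        effective = cached.work_mem_kb
    elif policy == "b":
        # Option (b): replan if the setting changed since planning,
        # analogous to replanning for different parameter values.
        if current_work_mem_kb != cached.work_mem_kb:
            cached = plan(cached.sql, current_work_mem_kb)
        effective = cached.work_mem_kb
    else:
        # Current behavior: plan under one limit, enforce another.
        effective = current_work_mem_kb
    return cached, effective
```

Under (a) a statement prepared at 1GB keeps enforcing 1GB; under (b)
executing at 1MB produces a fresh plan costed for 1MB.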
Regards,
Jeff Davis