From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: dmigowski(at)ikoffice(dot)de
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #14726: Memory consumption of PreparedStatement
Date: 2017-07-02 16:36:26
Message-ID: 29649.1499013386@sss.pgh.pa.us
Lists: pgsql-bugs
dmigowski(at)ikoffice(dot)de writes:
> I concluded that the query plan of the statement, which I uploaded to
> depesz, would result in 30MB of memory footprint!
> https://explain.depesz.com/s/gN2
Really? There are 132 plan nodes in that plan. I could believe that we're
eating several KB per plan node, but not 220KB per node.
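[The 220KB figure is just the quoted 30MB total divided across the 132 plan nodes; a quick sanity check on the arithmetic, using only numbers from the thread:]

```python
# Sanity check on the per-node figure discussed above.
# 30 MB spread over the 132 plan nodes in the depesz plan:
plan_nodes = 132
total_bytes = 30 * 1000 * 1000   # reading "30MB" as 30 * 10^6 bytes

per_node = total_bytes / plan_nodes
print(f"{per_node / 1000:.0f} KB per plan node")  # roughly 227 KB
```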
> * How can I determine the memory footprint of prepared statements?
It's not something we track specifically.
> * Wouldn't it be useful if we could give a memory limit for prepared
> statements on the server, so that PostgreSQL automatically evicts them if
> more are prepared, maybe by using an LRU list?
I'd be against that, as it pretty much would destroy the point of having
prepared statements at all --- if the server forgets them at random, or
even just has to re-prepare them unexpectedly, it's not holding up its
end of the contract. It's the application's job to use that resource
in a prudent manner, just like any other SQL resource.
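[The application-side discipline described here is what client drivers and connection pools typically implement: the client, not the server, bounds its prepared-statement set, so nothing it still holds a handle to can vanish underneath it. A minimal sketch, assuming hypothetical prepare/deallocate callbacks standing in for a real driver's wire calls:]

```python
from collections import OrderedDict

class StatementCache:
    """Client-side LRU cache of prepared statements (illustrative sketch).

    Eviction happens in the client, so the application always knows
    which statements are still valid on the server.
    """

    def __init__(self, prepare, deallocate, max_size=256):
        self._prepare = prepare          # e.g. sends PREPARE to the server
        self._deallocate = deallocate    # e.g. sends DEALLOCATE
        self._max_size = max_size
        self._cache = OrderedDict()      # SQL text -> statement handle

    def get(self, sql):
        if sql in self._cache:
            self._cache.move_to_end(sql)  # mark as most recently used
            return self._cache[sql]
        handle = self._prepare(sql)
        self._cache[sql] = handle
        if len(self._cache) > self._max_size:
            _, old = self._cache.popitem(last=False)  # evict LRU entry
            self._deallocate(old)
        return handle
```

[Because the client performs the eviction and the matching DEALLOCATE itself, the server never "forgets" a statement the application still intends to use, which is the contract a server-side LRU could not keep.]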
> PostgreSQL could automatically replan them when they get used again, I
> think.
This view seems like a poster child for why that would be a bad idea
--- that query has to take a mighty long time to plan.
regards, tom lane