From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: AgentM <agentm(at)themactionfaction(dot)com>
Cc: postgres hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Prepared statements considered harmful
Date: 2006-08-31 16:04:51
Message-ID: 8619.1157040291@sss.pgh.pa.us
Lists: pgsql-hackers
AgentM <agentm(at)themactionfaction(dot)com> writes:
> On Aug 31, 2006, at 11:18 , mark(at)mark(dot)mielke(dot)cc wrote:
>> I'm attempting to understand why prepared statements would be used for
>> long enough for tables to change to a point that a given plan will
>> change from 'optimal' to 'disastrous'.
> Scenario: A web application maintains a pool of connections to the
> database. If the connections have to be regularly restarted due to a
> postgres implementation detail (stale plans), then that is a database
> deficiency.
The two major complaints that I've seen are
* plpgsql's prepared plans don't work at all for scenarios involving
temp tables that are created and dropped in each use of the function.
The cached plan still references the temp table from the first call,
so the query has to be replanned on every subsequent call.
Right now we tell people they have to use EXECUTE (see the sketch
below), which is painful and gives up unnecessary amounts of
performance (because it might well be useful to cache a plan for the
lifespan of the table).
* for parameterized queries, a generic plan gives up too much
performance compared to one generated for specific constant parameter
values.
Neither of these problems has anything to do with statistics getting
stale.
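
For illustration, a minimal sketch of the EXECUTE workaround for the
first complaint (hypothetical function and table names):

    CREATE OR REPLACE FUNCTION count_tmp() RETURNS bigint AS $$
    DECLARE
        n bigint;
    BEGIN
        CREATE TEMP TABLE tmp_work(id int);
        -- Every query touching the temp table has to go through EXECUTE;
        -- a directly cached plan would still reference the table created
        -- (and dropped) by the first call.
        EXECUTE 'INSERT INTO tmp_work SELECT 1';
        EXECUTE 'SELECT count(*) FROM tmp_work' INTO n;
        DROP TABLE tmp_work;
        RETURN n;
    END;
    $$ LANGUAGE plpgsql;

Every call pays the full parse-and-plan cost for the EXECUTE'd
queries, even though a plan cached for the lifespan of the table
would do.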
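
The second complaint is visible with EXPLAIN (hypothetical table and
column names, assuming a skewed distribution on customer_id):

    -- The generic plan must work for any parameter value, so the
    -- planner cannot apply the column statistics to the actual value:
    PREPARE get_orders(int) AS
        SELECT * FROM orders WHERE customer_id = $1;
    EXPLAIN EXECUTE get_orders(42);

    -- With the constant visible at plan time, the planner can see how
    -- selective it is and may choose an indexscan over a seqscan:
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;

The prepared form settles for a plan that is safe for any parameter
value, which is exactly the performance gap being complained about.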
regards, tom lane