From: nconway(at)klamath(dot)dyndns(dot)org (Neil Conway)
To: PostgreSQL Patches <pgsql-patches(at)postgresql(dot)org>
Subject: prepareable statements
Date: 2002-06-28 17:41:54
Message-ID: 20020628174154.GA5353@klamath.dyndns.org
Lists: pgsql-hackers pgsql-patches
The attached patch implements per-backend preparable statements.
The syntax is:
PREPARE name_of_stmt(param_types) FROM <some query>;
EXECUTE name_of_stmt [INTO relation] [USING args];
DEALLOCATE [PREPARE] name_of_stmt;
I don't really like the 'FROM' keyword in PREPARE (I was planning to
use 'AS'), but that's what SQL92 specifies.
The optional PREPARE keyword in DEALLOCATE is accepted but ignored; it's there for SQL92 compliance.
You can specify EXECUTE ... INTO, using the same syntax as SELECT
INTO, to store the result set from the EXECUTE in a relation.
The syntax is largely SQL92 compliant, but not totally. I'm not sure how
the SQL spec expects parameters to be set up in PREPARE, but I doubt
it's the same way I used. And the SQL92 spec for EXECUTE is functionally
similar, but uses a different syntax (EXECUTE ... USING INTO <rel>, I
think). If someone can decipher the spec on these two points and
can suggest what the proper syntax should be, let me know.
Parameters are fully supported -- for example:
PREPARE q1(text) FROM SELECT * FROM pg_class WHERE relname = $1;
EXECUTE q1 USING 'abc';
For simple queries such as the preceding one, using PREPARE followed
by EXECUTE is about 10% faster than continuously using SELECT (when
executing 100,000 statements). When executing more complex statements
(such as the monstrous 12 table join used by the JDBC driver for
getting some meta-data), the performance improvement is more drastic
(IIRC it was about 100x in that case, when executing 75 statements).
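The test harness isn't part of the patch; roughly, the comparison was of
this shape (a simplified sketch, not the actual program I used -- it assumes
a database called "regression" and just measures wall-clock time over libpq):

    /*
     * Sketch only: run the same query N times as a plain SELECT, then
     * N times through EXECUTE against a prepared statement, and compare
     * elapsed wall-clock time.
     */
    #include <stdio.h>
    #include <sys/time.h>
    #include "libpq-fe.h"

    #define N_RUNS 100000

    static double
    run_many(PGconn *conn, const char *query)
    {
        struct timeval start, end;
        int         i;

        gettimeofday(&start, NULL);
        for (i = 0; i < N_RUNS; i++)
        {
            PGresult   *res = PQexec(conn, query);

            PQclear(res);
        }
        gettimeofday(&end, NULL);

        return (end.tv_sec - start.tv_sec) +
               (end.tv_usec - start.tv_usec) / 1000000.0;
    }

    int
    main(void)
    {
        /* "regression" is just a placeholder database name */
        PGconn     *conn = PQconnectdb("dbname=regression");

        if (PQstatus(conn) != CONNECTION_OK)
            return 1;

        printf("plain SELECT: %.2fs\n",
               run_many(conn, "SELECT * FROM pg_class WHERE relname = 'abc'"));

        PQclear(PQexec(conn, "PREPARE q1(text) FROM "
                       "SELECT * FROM pg_class WHERE relname = $1"));
        printf("EXECUTE:      %.2fs\n",
               run_many(conn, "EXECUTE q1 USING 'abc'"));

        PQfinish(conn);
        return 0;
    }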
I've included some regression tests for the work -- when/if the
patch is applied I'll write the documentation.
The patch stores queries in a hash table in TopMemoryContext. I
considered replacing the hash table with a linked list and
searching through that linearly, but I decided it wasn't worth
the bother (since the # of prepared statements is likely to be
very small, I would expect a linked list to outperform a hash
table in the common case). If you feel strongly one way or another,
let me know.
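In outline, the table setup looks something like this (a simplified sketch,
not lifted verbatim from the patch; the field names, key length, and initial
table size here are illustrative):

    #include "postgres.h"
    #include "nodes/pg_list.h"
    #include "utils/hsearch.h"
    #include "utils/memutils.h"

    #define STMT_NAME_LEN 64        /* arbitrary cap, for this sketch only */

    typedef struct
    {
        char            name[STMT_NAME_LEN];    /* hash key: statement name */
        MemoryContext   context;                /* per-statement memory context */
        List           *query_list;             /* analyzed query trees */
        List           *plan_list;              /* corresponding plans */
    } QueryHashEntry;

    static HTAB *prepared_queries = NULL;

    static void
    InitQueryHashTable(void)
    {
        HASHCTL     hash_ctl;

        MemSet(&hash_ctl, 0, sizeof(hash_ctl));
        hash_ctl.keysize = STMT_NAME_LEN;
        hash_ctl.entrysize = sizeof(QueryHashEntry);

        /*
         * dynahash puts the table in a long-lived context under
         * TopMemoryContext, so it persists for the life of the backend.
         */
        prepared_queries = hash_create("Prepared Queries",
                                       32,      /* initial size guess */
                                       &hash_ctl,
                                       HASH_ELEM);
    }

    /* Look up a previously PREPAREd statement by name; NULL if not found. */
    static QueryHashEntry *
    FetchStatement(const char *stmt_name)
    {
        char        key[STMT_NAME_LEN];

        MemSet(key, 0, sizeof(key));
        strncpy(key, stmt_name, STMT_NAME_LEN - 1);

        return (QueryHashEntry *) hash_search(prepared_queries, key,
                                              HASH_FIND, NULL);
    }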
Also, I'm not entirely sure my approach to memory management is
correct. Each entry in the hash table stores its data in its
own MemoryContext, which is deleted when the statement is
DEALLOCATE'd. When actually running the prepared statement
through the executor, CurrentMemoryContext is used. Let me know
if there's a better way to do this.
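Concretely, it amounts to roughly the following (continuing the sketch
above; again simplified, and the exact calls and header locations are from
memory rather than copied from the patch):

    #include "postgres.h"
    #include "nodes/nodes.h"        /* copyObject() */
    #include "utils/memutils.h"

    /*
     * Sketch: copy a statement's trees into a private context under
     * TopMemoryContext, so they survive across transactions and can be
     * freed wholesale on DEALLOCATE.
     */
    static void
    SaveStatementTrees(QueryHashEntry *entry, List *query_list, List *plan_list)
    {
        MemoryContext stmt_context;
        MemoryContext oldcxt;

        stmt_context = AllocSetContextCreate(TopMemoryContext,
                                             "PreparedStatementContext",
                                             ALLOCSET_DEFAULT_MINSIZE,
                                             ALLOCSET_DEFAULT_INITSIZE,
                                             ALLOCSET_DEFAULT_MAXSIZE);

        oldcxt = MemoryContextSwitchTo(stmt_context);
        entry->query_list = (List *) copyObject(query_list);
        entry->plan_list = (List *) copyObject(plan_list);
        MemoryContextSwitchTo(oldcxt);

        entry->context = stmt_context;
    }

    /* DEALLOCATE: deleting the context frees everything the entry owned. */
    static void
    DropStatementTrees(QueryHashEntry *entry)
    {
        MemoryContextDelete(entry->context);
    }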
This patch is based on Karel Zak's qCache patch for 7.0, but it's
completely new code (it's also a lot simpler, and doesn't bother
with caching plans in shared memory, as discussed on -hackers).
Cheers,
Neil
--
Neil Conway <neilconway(at)rogers(dot)com>
PGP Key ID: DB3C29FC
Attachment: prep_stmt.patch (text/plain, 41.0 KB)