From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Glenn Maynard <glennfmaynard(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: performance for high-volume log insertion
Date: 2009-04-23 11:11:32
Message-ID: 20090423111132.GN8123@tamriel.snowman.net
Lists: pgsql-performance
* Glenn Maynard (glennfmaynard(at)gmail(dot)com) wrote:
> I'd suggest this be mentioned in the sql-prepare documentation, then,
> because that documentation only discusses using prepared statements to
> eliminate redundant planning costs. (I'm sure it's mentioned in the
> API docs and elsewhere, but if it's a major intended use of PREPARE,
> the PREPARE documentation should make note of it.)
Argh. Perhaps the problem is that it's somewhat 'overloaded'. PG
supports *both* SQL-level PREPARE/EXECUTE commands and the more
traditional (well, in my view anyway...) API/protocol of PQprepare() and
PQexecPrepared(). When using the API/protocol, you don't actually
explicitly call the SQL 'PREPARE blah AS INSERT INTO', you just call
PQprepare() with 'INSERT INTO blah VALUES ($1, $2, $3);' and then call
PQexecPrepared() later.
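Roughly, the libpq side of it looks like the sketch below. The statement name, connection string, and the three example parameter values are just made up for illustration; the INSERT itself is the one above, and the point is that no SQL-level PREPARE ever appears in the query text.

#include <stdio.h>
#include <libpq-fe.h>

int
main(void)
{
    /* Hypothetical connection string -- adjust for your setup. */
    PGconn *conn = PQconnectdb("dbname=logs");
    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Prepare once; the query string is just the INSERT with $n
     * placeholders, passing NULL lets the server infer the types. */
    PGresult *res = PQprepare(conn, "log_insert",
                              "INSERT INTO blah VALUES ($1, $2, $3)",
                              3, NULL);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "PQprepare failed: %s", PQerrorMessage(conn));
    PQclear(res);

    /* Execute as many times as you like, sending only the values
     * (made-up example data here). */
    const char *values[3] = {"2009-04-23 11:11:32", "webhost1",
                             "GET /index.html"};
    res = PQexecPrepared(conn, "log_insert", 3, values, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "PQexecPrepared failed: %s", PQerrorMessage(conn));
    PQclear(res);

    PQfinish(conn);
    return 0;
}

The second call (and every one after it) skips parse/plan entirely and just ships the parameter values, which is where the win comes from for high-volume inserts.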
That's the reason it's not documented in the SQL-level PREPARE docs,
anyway. I'm not against adding some kind of reference there, but it's
not quite the way you think it is...
Thanks,
Stephen