| From: | Neil Conway <neilc(at)samurai(dot)com> |
|---|---|
| To: | Dawid Kuroczko <qnex42(at)gmail(dot)com> |
| Cc: | pgsql-general(at)postgresql(dot)org |
| Subject: | Re: psql vs perl prepared inserts |
| Date: | 2005-04-13 12:04:41 |
| Message-ID: | 425D0AD9.80401@samurai.com |
| Lists: | pgsql-general |
Dawid Kuroczko wrote:
> For a test you might want to also try this approach (both from Perl and
> from psql):
>
> $dbh->do('PREPARE sth_tim (int,inet,boolean,timestamptz) AS INSERT
> INTO timestamps VALUES ($1,$2,$3,$4)');
> $sth_tim = $dbh->prepare("EXECUTE sth_tim(?,?,?,?)");
>
> ...and later execute it (and likewise with psql). If you see a gain in speed
> with Perl, it means your DBD::Pg wasn't using server-side prepared
> statements.
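
For reference, here is roughly what that approach looks like wired up end to
end. This is only a sketch: the DSN and the sample values are placeholders,
and it assumes the four-column timestamps table used above.

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection details -- adjust for your environment.
    my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    # Create the server-side prepared statement once...
    $dbh->do('PREPARE sth_tim (int, inet, boolean, timestamptz) AS
              INSERT INTO timestamps VALUES ($1, $2, $3, $4)');

    # ...then run EXECUTE repeatedly with different parameters.
    # pg_server_prepare => 0 keeps DBD::Pg from trying to prepare the
    # EXECUTE itself; the ? placeholders are interpolated client-side.
    my $sth_tim = $dbh->prepare('EXECUTE sth_tim(?, ?, ?, ?)',
                                { pg_server_prepare => 0 });
    $sth_tim->execute(1, '10.0.0.1', 't', '2005-04-13 12:00:00+00');
    $sth_tim->execute(2, '10.0.0.2', 'f', '2005-04-13 12:00:01+00');

For what it's worth, DBD::Pg can also do the server-side prepare itself (see
its pg_server_prepare attribute), in which case a plain
$dbh->prepare('INSERT ...') already behaves this way and the manual
PREPARE/EXECUTE step buys nothing.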
The intent of prepared statements is to reduce the overhead of running
the parser, rewriter and planner multiple times for a statement that is
executed multiple times. For an INSERT that contains no sub-selects and
is not rewritten by any rules, the cost to parse, rewrite and plan the
statement is trivial, so I wouldn't expect prepared statements to be
a big win -- you would gain a lot more from batching multiple inserts
into a single transaction, and more still from using COPY.
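To make that concrete, here is a rough sketch of both ideas. The DSN and the
sample rows are made up, and the COPY part assumes a DBD::Pg new enough to
offer pg_putcopydata/pg_putcopyend (older releases used the
$dbh->func(..., 'putline') interface instead).

    use strict;
    use warnings;
    use DBI;

    # Placeholder connection details and sample rows.
    my $dbh = DBI->connect('dbi:Pg:dbname=test', '', '', { RaiseError => 1 });
    my @rows = (
        [ 1, '10.0.0.1', 't', '2005-04-13 12:00:00+00' ],
        [ 2, '10.0.0.2', 'f', '2005-04-13 12:00:01+00' ],
    );

    # 1. Batch the INSERTs into a single transaction: one commit for
    #    the whole batch instead of one per row.
    my $sth = $dbh->prepare('INSERT INTO timestamps VALUES (?, ?, ?, ?)');
    $dbh->begin_work;
    $sth->execute(@$_) for @rows;
    $dbh->commit;

    # 2. COPY is faster still for bulk loading: start the COPY, then
    #    stream the rows as tab-separated text.
    $dbh->do('COPY timestamps FROM STDIN');
    $dbh->pg_putcopydata(join("\t", @$_) . "\n") for @rows;
    $dbh->pg_putcopyend();

The single-transaction point matters because committing after every row
forces a separate flush to disk for each one.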
-Neil