From: Karel Zak <zakkr(at)zf(dot)jcu(dot)cz>
To: Mike Mascari <mascarm(at)mascari(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Hannu Krosing <hannu(at)tm(dot)ee>, Matthias Urlichs <smurf(at)noris(dot)de>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Berkeley DB...
Date: 2000-05-29 14:57:04
Message-ID: Pine.LNX.3.96.1000529150613.7470A-100000@ara.zf.jcu.cz
Lists: pgsql-hackers
> It will be interesting to see the speed differences between the
> 100,000 inserts above and those which have been PREPARE'd using
> Karel Zak's PREPARE patch. Perhaps a generic query cache could be
My test:
postmaster: -F -B 2000
rows: 100,000
table: create table tab (data text);
data: 37B for each row
--- all inserts are in one transaction
native insert: 66.522s
prepared insert: 59.431s - 11% faster
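A minimal sketch of the prepared case (assuming a PREPARE/EXECUTE
syntax roughly like this; the patch's exact syntax may differ):

    BEGIN;
    -- parse and plan the INSERT once; $1 is the row data
    PREPARE ins (text) AS INSERT INTO tab VALUES ($1);
    -- then run it 100,000 times, skipping parser and planner
    EXECUTE ins ('...37 bytes of data...');
    COMMIT;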
IMHO parsing/optimizing is relatively easy for a simple INSERT.
The query (plan) cache will probably save time for complicated SELECTs
with functions etc. (i.e. queries whose parsing needs to look at system
tables). For example:
insert into tab values ('some data' || 'somedata' || 'some data');
native insert: 91.787s
prepared insert: 45.077s - 50% faster
(Note: this second test was faster because I stopped the X server, so
postgres had more memory :-)
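A sketch of this second case (the statement name is only illustrative):

    -- the concatenation is parsed and planned once, at PREPARE time
    PREPARE ins2 AS
        INSERT INTO tab VALUES ('some data' || 'somedata' || 'some data');
    -- each EXECUTE skips the parser and planner entirely
    EXECUTE ins2;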
The best way to insert large amounts of simple data is (and always will
be) COPY; no faster way exists.
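For example (the file path is only an illustration):

    -- server-side COPY reads the file straight into the table,
    -- bypassing the per-row parser/planner work
    COPY tab FROM '/tmp/data.txt';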
PG's paths for a query:

native insert: parser -> planner -> executor -> storage
prepared insert: parser (for the EXECUTE stmt) -> executor -> storage
copy: utils (copy) -> storage
> amongst other things). I'm looking forward to when the 7.1 branch
> occurs... :-)
Me too.
Karel