From: Alfred Perlstein <bright(at)wintelcom(dot)net>
To: Matthew Kirkwood <matthew(at)hairy(dot)beasts(dot)org>
Cc: Jules Bean <jules(at)jellybean(dot)co(dot)uk>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance on inserts
Date: 2000-08-26 11:32:51
Message-ID: 20000826043251.P1209@fw.wintelcom.net
Lists: pgsql-hackers
* Matthew Kirkwood <matthew(at)hairy(dot)beasts(dot)org> [000826 04:22] wrote:
> On Sat, 26 Aug 2000, Jules Bean wrote:
>
> > Is there any simple way for Pg to combine inserts into one bulk?
> > Specifically, their effect on the index files. It has always seemed
> > to me to be one of the (many) glaring flaws in SQL that the INSERT
> > statement only takes one row at a time.
>
> One of MySQL's little syntax abuses allows:
>
> INSERT INTO tab (col1, ..) VALUES (val1, ..), (val2, ..);
>
> which is nice for avoiding database round trips. It's one
> of the reasons that mysql can do a bulk import so quickly.
That would be an _extremely_ useful feature if it made a difference
in PostgreSQL's insert speed.
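A minimal sketch of the round-trip savings from the multi-row VALUES form quoted above, using Python's sqlite3 module (SQLite also accepts that syntax); the table and column names here are made up for illustration:

```python
import sqlite3

# In-memory database; "tab", "col1", "col2" are hypothetical names.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (col1 TEXT, col2 INTEGER)")

# One statement, one round trip, three rows -- the multi-row form
# Matthew quotes above.
conn.execute(
    "INSERT INTO tab (col1, col2) VALUES ('a', 1), ('b', 2), ('c', 3)")

# Drivers often get a similar bulk effect with executemany(), which
# binds many parameter tuples to one prepared statement.
conn.executemany("INSERT INTO tab (col1, col2) VALUES (?, ?)",
                 [('d', 4), ('e', 5)])

print(conn.execute("SELECT COUNT(*) FROM tab").fetchone()[0])  # -> 5
```

Either way the client avoids one network round trip per row, which is a large part of why bulk imports built on this syntax run quickly.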
>
> > But, using INSERT ... SELECT, I can imagine that it might be possible
> > to do 'bulk' index updating. so that scanning process is done once per
> > 'batch'.
>
> Logic for these two cases would be excellent.
We do this sometimes; it works pretty nicely.
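The INSERT ... SELECT batching idea above might look like the following sketch (sqlite3 again; the unindexed "staging" table and the column names are assumptions, not anything from the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE main_tab (id INTEGER PRIMARY KEY, val TEXT);
    -- staging has no indexes, so per-row inserts into it are cheap
    CREATE TABLE staging (id INTEGER, val TEXT);
""")

# Load the batch into the unindexed staging table first.
conn.executemany("INSERT INTO staging (id, val) VALUES (?, ?)",
                 [(i, "row%d" % i) for i in range(100)])

# One INSERT ... SELECT moves the whole batch; the index on main_tab
# is maintained inside a single statement instead of 100 separate ones.
conn.execute("INSERT INTO main_tab (id, val) SELECT id, val FROM staging")

print(conn.execute("SELECT COUNT(*) FROM main_tab").fetchone()[0])  # -> 100
```

Whether the planner actually updates the index once per batch rather than once per row is up to the database, but the pattern at least gives it the chance.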
--
-Alfred Perlstein - [bright(at)wintelcom(dot)net|alfred(at)freebsd(dot)org]
"I have the heart of a child; I keep it in a jar on my desk."