| From: | "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com> |
|---|---|
| To: | James Mansion <james(at)mansionfamily(dot)plus(dot)com> |
| Cc: | Stephen Frost <sfrost(at)snowman(dot)net>, david(at)lang(dot)hm, pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: performance for high-volume log insertion |
| Date: | 2009-04-22 21:17:43 |
| Message-ID: | 1240435063.2119.119.camel@jd-laptop.pragmaticzealot.org |
| Lists: | pgsql-performance |
On Wed, 2009-04-22 at 21:53 +0100, James Mansion wrote:
> Stephen Frost wrote:
> > You're re-hashing things I've already said. The big win is batching the
> > inserts, however that's done, into fewer transactions. Sure, multi-row
> > inserts could be used to do that, but so could dropping begin/commits in
> > right now which probably takes even less effort.
> >
> Well, I think you are seriously underestimating the cost of the round-trip compared
The breakdown is this:
1. Eliminate single inserts
2. Eliminate round trips
Yes, round trips are hugely expensive.
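Both points can be addressed at once by sending a single multi-row INSERT instead of one statement per row. A minimal client-side sketch in Python (the `log` table and its columns are illustrative, not from the thread):

```python
# Hypothetical log rows; table/column names are assumptions for illustration.
rows = [("host1", "msg a"), ("host2", "msg b"), ("host3", "msg c")]

# Build one statement with a placeholder tuple per row, instead of
# len(rows) separate INSERTs -- one round trip instead of three.
placeholders = ", ".join("(%s, %s)" for _ in rows)
sql = "INSERT INTO log (host, message) VALUES " + placeholders
params = [value for row in rows for value in row]
```

The resulting `sql` and flattened `params` list can be handed to any driver that uses `%s`-style parameters; for very large batches, COPY is cheaper still.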
>
> > No, as was pointed out previously already, you really just need 2. A
> >
> And I'm disagreeing with that. Single row is a given, but I think you'll find it pays to have one
My experience shows that you are correct. Even if you do a single BEGIN;
with 1000 inserts, you are still paying a round trip for every insert
until you commit. At a 20 ms round-trip time, that is 20 seconds of
additional overhead.
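The arithmetic behind that estimate, as a sketch (the 20 ms figure is the example latency from the message above, not a measured value):

```python
ROUND_TRIP_MS = 20   # assumed client-server round-trip latency
N_INSERTS = 1000

# One round trip per INSERT, even when all of them share one transaction:
per_statement_overhead_s = N_INSERTS * ROUND_TRIP_MS / 1000.0  # 20.0 seconds

# A single multi-row INSERT (or COPY) pays the latency once:
batched_overhead_s = 1 * ROUND_TRIP_MS / 1000.0                # 0.02 seconds
```

Wrapping the 1000 inserts in BEGIN/COMMIT saves fsync costs, but only batching removes the network term, which dominates at high volume.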
Joshua D. Drake
--
PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
Consulting, Development, Support, Training
503-667-4564 - http://www.commandprompt.com/
The PostgreSQL Company, serving since 1997