Re: Insert performance

From: "hatman" <joel(dot)winteregg(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Cc: "Richard Huxton" <dev(at)archonet(dot)com>, "Andreas Kostyrka" <andreas(at)kostyrka(dot)org>
Subject: Re: Insert performance
Date: 2007-03-06 15:46:36
Message-ID: 1173195995.037709.232750@n33g2000cwc.googlegroups.com
Lists: pgsql-performance

Hi Richard,

>
> >>> No, as said above transactions are made of 100000 inserts...
> >> Hmm - I read that as just meaning "inserted 100000 rows". You might find
> >> that smaller batches provide peak performance.
>
> > Ahh ok ;-) sorry for my bad English... (yeah, I have been testing
> > several transaction sizes: 10000, 20000 and 100000)
>
> Not your bad English, my poor reading :-)
>
>
>
> >>>> If so, you'll be limited by the speed of the disk the WAL is running on.
>
> >>>> That means you have two main options:
> >>>> 1. Have multiple connections inserting simultaneously.
> >>> Yes, you're right. That's what I have been testing and what provides
> >>> the best performance! I saw that the postgresql frontend was using a
> >>> lot of CPU, but not both cores (I'm using a Pentium D, dual core). In
> >>> contrast, the postmaster process uses few resources. Using several
> >>> clients, both CPUs are used and I saw an increase in performance
> >>> (about 18000 inserts/sec).
>
> >>> So I think my bottleneck is more the CPU speed than the disk speed;
> >>> what do you think ?
> >> Well, I think it's fair to say it's not disk. Let's see - the original
> >> figure was 8000 inserts/sec, which is 0.125ms per insert. That sounds
> >> plausible to me for a round-trip to process a simple command - are you
> >> running the client app on the same machine, or is it over the network?
>
> > I did both tests. On the local machine (using UNIX sockets) I can reach
> > 18000 inserts/sec with 10 clients and prepared statements. The same test
> > using clients on a remote machine gives me 13000 inserts/sec.
>
> OK, so we know what the overhead for network connections is.
>
> > Now, with multiple clients (multi-threaded inserts) both my CPUs are
> > quite well used (both around 90%), so maybe disk speed is now my
> > bottleneck. What do you think ? Or maybe I need a faster CPU ?
>
> >> Two other things to bear in mind:
> >> 1. If you're running 8.2 you can have multiple sets of values in an INSERT
> >> http://www.postgresql.org/docs/8.2/static/sql-insert.html
>
> > Yeah, I'm running version 8.2.3! I didn't know about multiple
> > insert sets! Thanks for the tip ;-)
>

> Ah-ha! Give it a go, it's designed for this sort of situation. Not sure
> it'll manage thousands of value clauses, but working up from 10 perhaps.
> I've not tested it for performance, so I'd be interested in knowing how
> it compares to your other results.

Yeah, I'll give it a try as soon as possible! Thanks for the
feedback ;-)
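(For reference, a multi-row INSERT can be assembled on the client side; here is a minimal sketch that builds such a statement with parameter placeholders. The table and column names are made up for the example.)

```python
# Sketch: build one multi-row INSERT (supported from PostgreSQL 8.2 on)
# instead of many single-row statements. "measurements", "sensor" and
# "value" are illustrative names, not from the thread.
def multirow_insert(table, columns, rows):
    cols = ", ".join(columns)
    # one "(%s, %s, ...)" group per row, all in a single VALUES list
    placeholders = ", ".join(
        "(" + ", ".join("%s" for _ in columns) + ")" for _ in rows
    )
    params = [v for row in rows for v in row]
    return "INSERT INTO %s (%s) VALUES %s" % (table, cols, placeholders), params

sql, params = multirow_insert("measurements", ["sensor", "value"],
                              [(1, 10.5), (2, 11.2), (3, 9.8)])
print(sql)
# INSERT INTO measurements (sensor, value) VALUES (%s, %s), (%s, %s), (%s, %s)
print(params)  # [1, 10.5, 2, 11.2, 3, 9.8]
```

One round-trip then carries all the rows, which is where the saving over per-row INSERTs comes from.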

>
> >> 2. You can do a COPY from libpq - is it really not possible?
>
> > Not really, but I have been testing it and inserts are flying (about
> > 100000 inserts/sec)!!
>
> What's the problem with the COPY? Could you COPY into one table then
> insert from that to your target table?

The main problem comes from our "real-time" needs. We receive a data
flow from several applications and need to store it in the DB without
buffering it for too long...
I have been testing COPY with several statements (I mean using COPY to
add only a few rows to one table, then using it on another table to
add a few rows, etc.) and the performance is as bad as plain INSERTs!
COPY seems to be designed for adding many rows to the same table, not
a few rows to several tables... So that's my main problem.
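(One possible compromise, sketched below under stated assumptions: buffer incoming rows per destination table and flush each buffer with a single COPY once it reaches a threshold, so each COPY still amortizes over many rows. `flush()` here only records what would be sent; with libpq it would issue `COPY table FROM STDIN` and stream the buffered rows. The `CopyBuffer` class and "events" table are hypothetical names.)

```python
# Sketch (assumption): group the incoming real-time flow by table and
# flush each group with one COPY when it grows past a threshold, instead
# of one COPY (or INSERT) per row.
class CopyBuffer:
    def __init__(self, threshold=500):
        self.threshold = threshold
        self.buffers = {}   # table name -> list of pending rows
        self.flushed = []   # (table, row_count) pairs, for illustration

    def add(self, table, row):
        buf = self.buffers.setdefault(table, [])
        buf.append(row)
        if len(buf) >= self.threshold:
            self.flush(table)

    def flush(self, table):
        rows = self.buffers.pop(table, [])
        if rows:
            # real code: one "COPY ... FROM STDIN" sending all buffered rows
            self.flushed.append((table, len(rows)))

cb = CopyBuffer(threshold=3)
for i in range(7):
    cb.add("events", (i, "payload"))
cb.flush("events")  # flush the remainder on shutdown or a timer
print(cb.flushed)   # [('events', 3), ('events', 3), ('events', 1)]
```

The trade-off is latency: rows sit in the buffer until the threshold (or a timeout) is hit, so the threshold has to be tuned against how much buffering the real-time constraint tolerates.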

Regards,

Joël
