From: Shane Ambler <pgsql(at)007Marketing(dot)com>
To: ilejn(at)yandex(dot)ru
Cc: pgsql-general(at)postgresql(dot)org, oleg(at)sai(dot)msu(dot)su, teodor(at)sigaev(dot)ru
Subject: Re: COPY FROM STDIN instead of INSERT
Date: 2006-10-18 08:43:23
Message-ID: 4535E92B.5020109@007Marketing.com
Lists: pgsql-general
Ilja Golshtein wrote:
> Hello!
>
> One important use case in my libpq based application (PostgreSQL 8.1.4) is a sort of massive data loading.
>
> Currently it is implemented as a series of plain normal INSERTs
> (the binary form of PQexecParams is used), and the problem is that it is pretty slow.
>
> I've tried to play with batches and with peculiar constructions
> like INSERT (SELECT .. UNION ALL SELECT ..) to improve performance, but I am not satisfied with the results I've got.
>
> Now I am trying to figure out if it is possible to use COPY FROM STDIN instead of INSERT if I have to insert, say, more than 100 records at once.
>
> Hints are highly appreciated.
>
> The only limitation mentioned in the manual is about rules, and I don't care about that since I don't use rules.
> Am I going to come across any other problems (concurrency, reliability, compatibility, whatever) along the way?
>
> Many thanks.
>
Using COPY FROM STDIN is much faster than INSERTs (I am sure some out
there have test timings to compare; I don't have any on hand).

It sounds like you're working with an existing database - if you are
starting from scratch (inserting data into an empty database) then
there are other things that can help too.
--
Shane Ambler
Postgres(at)007Marketing(dot)com
Get Sheeky @ http://Sheeky.Biz