From: Richard Huxton <dev(at)archonet(dot)com>
To: Andreas Kostyrka <andreas(at)kostyrka(dot)org>
Cc: joël Winteregg <joel(dot)winteregg(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Insert performance
Date: 2007-03-06 12:24:58
Message-ID: 45ED5D9A.2010702@archonet.com
Lists: pgsql-performance
Andreas Kostyrka wrote:
> * Richard Huxton <dev(at)archonet(dot)com> [070306 12:22]:
>>>> 2. You can do a COPY from libpq - is it really not possible?
>>>>
>>> Not really, but I have been testing it, and inserts are flying (about
>>> 100000 inserts/sec)!!
>> What's the problem with the COPY? Could you COPY into one table then insert from that to your target table?
> Well, there are some issues. First, your client needs to support it.
> E.g. psycopg2 supports only some specific CSV formatting in its
> methods. (Plus I sometimes had random psycopg2 crashes, but guarding against
> these is cheap compared to the speedup of COPY versus INSERT.)
> Plus you need to be sure that your data will apply cleanly (which in
> my app was not the case), or you need to code a fallback that
> localizes the row that doesn't work.
>
> And the worst thing is that it ignores RULES on the tables, which
> sucks if you use them ;) (e.g. table partitioning).
Ah, but two things deal with these issues:
1. Joel is using libpq
2. COPY into a holding table, tidy the data, and INSERT ... SELECT
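The holding-table pattern in point 2 might look like the following sketch. The table names (`raw_events`, `events`) and columns are hypothetical, not from the thread; the idea is that COPY lands fast in a constraint-free holding table, and the INSERT ... SELECT then cleans the data and goes through any RULES (e.g. partitioning) on the real table:

```python
# Hypothetical table names for illustration only.
HOLDING = "raw_events"
TARGET = "events"

# Fast bulk load into the bare holding table.
copy_sql = f"COPY {HOLDING} (id, payload) FROM STDIN"

# Tidy and move the data; this statement *does* fire rules/triggers
# on the target table, unlike COPY.
tidy_and_load = (
    f"INSERT INTO {TARGET} (id, payload) "
    f"SELECT id, trim(payload) FROM {HOLDING} "
    f"WHERE payload IS NOT NULL;"  # drop rows that would not apply cleanly
)

# Empty the holding table for the next batch.
cleanup = f"TRUNCATE {HOLDING};"

# With psycopg2 this would run roughly as:
#   cur.copy_expert(copy_sql, io.StringIO(payload))
#   cur.execute(tidy_and_load)
#   cur.execute(cleanup)
```

The WHERE clause and `trim()` stand in for whatever per-row validation the application needs; anything that would have aborted a raw COPY can be filtered or fixed here instead.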
--
Richard Huxton
Archonet Ltd