From: | steve k <steven(dot)c(dot)kohler(at)nasa(dot)gov> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: PQputCopyData dont signal error |
Date: | 2014-03-31 14:18:08 |
Message-ID: | 1396275488641-5798002.post@n5.nabble.com |
Lists: | pgsql-hackers |
[Image: PostgreSQL manual excerpt]
<http://postgresql.1045698.n5.nabble.com/file/n5798002/PG_man_excerpt.png>
These were my results:
[Image: embedded COPY log excerpt]
<http://postgresql.1045698.n5.nabble.com/file/n5798002/PG_embedded_copy_log_excerpt.png>
I'd advise anyone contemplating using this feature to test it very, very
thoroughly and to examine your logs after each test run before moving it into
your baseline. Maybe you'll have better luck than I did.
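For anyone running such tests, the pattern to exercise looks roughly like the
sketch below (not taken from my test code; the table my_table(id int, val
numeric) and the sample rows are hypothetical). The important part is checking
not just each PQputCopyData() return value but, crucially, the PGresult
returned by PQgetResult() after PQputCopyEnd(), since that is normally where a
server-side rejection of the copied data becomes visible:

#include <stdio.h>
#include <string.h>
#include <libpq-fe.h>

/* Hypothetical example table: my_table(id int, val numeric).
   Returns 0 on success, -1 on failure. */
int copy_rows(PGconn *conn)
{
    PGresult *res = PQexec(conn, "COPY my_table (id, val) FROM STDIN (FORMAT csv)");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY did not start: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }
    PQclear(res);

    const char *rows[] = { "1,10.5\n", "2,abc\n" };  /* second row is bad on purpose */
    for (int i = 0; i < 2; i++)
    {
        /* A return value of 1 here does NOT mean the server accepted the row. */
        if (PQputCopyData(conn, rows[i], (int) strlen(rows[i])) != 1)
        {
            fprintf(stderr, "PQputCopyData failed: %s", PQerrorMessage(conn));
            return -1;
        }
    }

    if (PQputCopyEnd(conn, NULL) != 1)
    {
        fprintf(stderr, "PQputCopyEnd failed: %s", PQerrorMessage(conn));
        return -1;
    }

    /* The server's verdict on the COPY only shows up here. */
    int ok = 0;
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
        {
            fprintf(stderr, "COPY failed: %s", PQresultErrorMessage(res));
            ok = -1;
        }
        PQclear(res);
    }
    return ok;
}

Even with checks like these in place, I'd still compare against the server log
after each run, as advised above.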
For what it's worth, I got very good performance results from using INSERT
with a multi-row VALUES list that inserted 1000 records at a time.
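For reference, a minimal sketch of that batching pattern in libpq follows (not
my actual code; the table my_table(id int, val numeric) and the generated
values are hypothetical, and real data should go through PQescapeLiteral() or
query parameters rather than plain string building):

#include <stdio.h>
#include <libpq-fe.h>

#define BATCH_SIZE 1000   /* rows per INSERT, as described above */

/* Inserts BATCH_SIZE generated rows in one statement; returns 0 on success. */
int insert_batch(PGconn *conn, int first_id)
{
    static char sql[64 * 1024];   /* ample room for 1000 small tuples */
    size_t len = 0;

    len += snprintf(sql + len, sizeof(sql) - len,
                    "INSERT INTO my_table (id, val) VALUES ");
    for (int i = 0; i < BATCH_SIZE && len < sizeof(sql); i++)
        len += snprintf(sql + len, sizeof(sql) - len,
                        "%s(%d, %d)", i ? ", " : "", first_id + i, i * 10);
    if (len >= sizeof(sql))
    {
        fprintf(stderr, "statement buffer too small\n");
        return -1;
    }

    PGresult *res = PQexec(conn, sql);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        /* A bad value (e.g. alphabetic data in a numeric column) is reported
           here, with the offending value quoted in the error message. */
        fprintf(stderr, "INSERT failed: %s", PQresultErrorMessage(res));
        PQclear(res);
        return -1;
    }
    PQclear(res);
    return 0;
}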
For example, one error test (of many) that purposely attempted to insert
alphabetic data into a numeric field yielded explicit, correct information
about the exact line of data causing the error within the 1000 records being
inserted. With that information in hand it would be eminently feasible to go
back to the baseline and examine any recent source code updates that might
have altered the generation of the offending data.
Hopefully this helps anyone trying to handle large amounts of data quickly
and wondering what a viable solution might be.
Best regards to everyone and thank you all for your time,
Steve K.