From: Sameer Kumar <sameer(dot)kumar(at)ashnik(dot)com>
To: Jim Garrison <jim(dot)garrison(at)nwea(dot)org>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Remote troubleshooting session connection?
Date: 2014-04-09 04:53:46
Message-ID: CADp-Sm5-7rYFqoPj73O+aDNvK3RDfGwuXfBkTC096f7broS3SA@mail.gmail.com
Lists: pgsql-general
On Sat, Apr 5, 2014 at 6:41 AM, Jim Garrison <jim(dot)garrison(at)nwea(dot)org> wrote:
> An ETL "job" runs inside its own transaction and consists of a series of
> queries that transform the data from staging tables to the destination
> tables. If a failure occurs, the transaction rolls back so there's no
> "debris" left over -- which makes troubleshooting very difficult.
>
If you are loading a huge amount of data, then:
1) Committing every 10000 (or so) rows might make sense.
2) Have you considered using the COPY API in the PostgreSQL JDBC driver?
(There is a rough sketch after this list.)
3) Which version of PostgreSQL are you using? 9.3 added a FREEZE option to
COPY which might help you. I am not sure if the JDBC API supports it.
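
Regarding point 2, here is a minimal sketch of what driving COPY through the
pgjdbc CopyManager could look like. The connection URL, credentials, file name
and the staging_orders table are made-up placeholders for illustration; only
the CopyManager/copyIn() call itself is part of the driver's copy API:

    import java.io.FileReader;
    import java.io.Reader;
    import java.sql.Connection;
    import java.sql.DriverManager;

    import org.postgresql.PGConnection;
    import org.postgresql.copy.CopyManager;

    public class CopyLoadSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details -- substitute your own.
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost:5432/etl", "etl_user", "secret")) {
                conn.setAutoCommit(false);

                // CopyManager is pgjdbc's entry point for the COPY protocol.
                CopyManager copy = conn.unwrap(PGConnection.class).getCopyAPI();

                try (Reader csv = new FileReader("staging_orders.csv")) {
                    // "staging_orders" is a made-up table name.
                    long rows = copy.copyIn(
                            "COPY staging_orders FROM STDIN WITH (FORMAT csv)", csv);
                    System.out.println("Loaded " + rows + " rows");
                }

                conn.commit();
            }
        }
    }

On 9.3 you could also add FREEZE to the option list, e.g.
COPY staging_orders FROM STDIN WITH (FORMAT csv, FREEZE), but only if the
table was created or truncated earlier in the same transaction.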
Best Regards,
Sameer Kumar | Database Consultant
ASHNIK PTE. LTD.
101 Cecil Street, #11-11 Tong Eng Building, Singapore 069533
M: +65 8110 0350 T: +65 6438 3504 | www.ashnik.com
This email may contain confidential, privileged or copyright material and
is solely for the use of the intended recipient(s).