From: NikhilS <nikkhils(at)gmail(dot)com>
To: longlong <asfnuts(at)gmail(dot)com>
Cc: "Neil Conway" <neilc(at)samurai(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: COPY issue(gsoc project)
Date: 2008-03-14 08:26:26
Message-ID: d3c4af540803140126i5c23ba0an7f4f717e9dd2cf84@mail.gmail.com
Lists: pgsql-hackers
Hi Longlong,
> > i think this is a better idea.
> from *NikhilS*:
> http://archives.postgresql.org/pgsql-hackers/2007-12/msg00584.php
> But instead of using a per insert or a batch insert subtransaction, I am
> thinking that we can start off a subtransaction and continue it till we
> encounter a failure. The moment an error is encountered, since we have the
> offending (already in heap) tuple around, we can call a simple_heap_delete
> on the same and commit (instead of aborting) this subtransaction after doing
> some minor cleanup. This current input data row can also be logged into a
> bad file. Recall that we need to only handle those errors in which the
> simple_heap_insert is successful, but the index insertion or the after row
> insert trigger causes an error. The rest of the load then can go ahead with
> the start of a new subtransaction.
> the simplest things are often the best.
> i think it's hard to implement or has some other deficiency, since you want a
> subtransaction for every "n" rows.
>
Yeah, simpler things are often the best, but as folks have mentioned, we need
a carefully thought-out approach here. The reply from Tom to my posting there
raises issues which need to be taken care of. That said, I still think that if
we carry out *sanity* checks before starting the load for the presence of
triggers, constraints, fkey constraints etc., and if others do not have any
issues with the approach, the simple_heap_delete idea should work in some
cases. Admittedly, the phrase I used, "after some minor cleanup", might need
some more thought now that I think about it..
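To make the idea concrete, here is a rough sketch of the control flow I have in
mind, in backend-style C. It is only an illustration, not working code:
next_copy_tuple() and log_bad_row() are made-up placeholder names, rel and
cstate stand for the COPY target relation and copy state inside CopyFrom(), the
index/trigger work is elided to a comment, and the "minor cleanup" in the error
path is exactly the part that still needs thought.

    /*
     * Sketch: run one subtransaction until a row fails, delete the
     * already-inserted heap tuple, commit (not abort) the subtransaction,
     * then start a fresh one for the rest of the load.
     */
    BeginInternalSubTransaction("COPY batch");

    for (;;)
    {
        HeapTuple   tuple = next_copy_tuple(cstate);    /* hypothetical */

        if (tuple == NULL)
            break;                          /* end of input */

        simple_heap_insert(rel, tuple);

        PG_TRY();
        {
            /*
             * Index insertions and AFTER ROW triggers go here; either of
             * them may elog(ERROR).
             */
        }
        PG_CATCH();
        {
            /* the heap insert itself succeeded, so undo it */
            simple_heap_delete(rel, &tuple->t_self);

            log_bad_row(cstate, tuple);     /* hypothetical: write bad file */

            FlushErrorState();

            /*
             * Commit (instead of aborting) this subtransaction after the
             * "minor cleanup" (hand-waved here), then start a new one.
             */
            ReleaseCurrentSubTransaction();
            BeginInternalSubTransaction("COPY batch");
        }
        PG_END_TRY();
    }

    ReleaseCurrentSubTransaction();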
Also, if fkey checks or complex triggers are around, maybe we can fall back
to a subtransaction per row insert as a worst-case measure..
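The per-row fallback would then look roughly like this (again only a sketch;
insert_one_row() is a hypothetical stand-in for the heap insert plus index and
trigger work, and log_bad_row()/next_copy_tuple() are the same placeholders as
above):

    for (;;)
    {
        HeapTuple   tuple = next_copy_tuple(cstate);    /* hypothetical */

        if (tuple == NULL)
            break;

        BeginInternalSubTransaction("COPY row");

        PG_TRY();
        {
            insert_one_row(cstate, tuple);      /* heap + indexes + triggers */
            ReleaseCurrentSubTransaction();     /* commit just this row */
        }
        PG_CATCH();
        {
            FlushErrorState();
            RollbackAndReleaseCurrentSubTransaction();  /* abort this row only */
            log_bad_row(cstate, tuple);         /* hypothetical */
        }
        PG_END_TRY();
    }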
Regards,
Nikhils
--
EnterpriseDB http://www.enterprisedb.com