From: "Jim Buttafuoco" <jim(at)buttafuoco(dot)net>
To: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>, Peter Eisentraut <peter_e(at)gmx(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jim Buttafuoco <jim(at)buttafuoco(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-12-16 14:12:14
Message-ID: 200112161412.fBGECER20364@dual.buttafuoco.net
Lists: pgsql-hackers
I agree with Lee. I also like Oracle's option of a discard file: you can
look at what was rejected, fix the problem, and if necessary reload just
the rejects.
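The discard-file behaviour described above can be sketched in plain Python. This is a hypothetical illustration of the idea, not PostgreSQL's COPY or Oracle's SQL*Loader: rows whose key duplicates an already-loaded row are diverted to a reject list (which would be written out as the discard file) instead of aborting the whole load, so the rejects can be fixed and reloaded on their own.

```python
# Hypothetical sketch of a discard-file bulk load (illustration only;
# not the behaviour of PostgreSQL's COPY). Duplicate-key rows are
# collected separately instead of failing the entire load.

def bulk_load(rows, key_index=0):
    """Load rows, keeping only the first row seen for each key.

    Returns (loaded, discarded): the accepted rows, and the rejects
    that could be written to a discard file, corrected, and reloaded.
    """
    seen = set()
    loaded, discarded = [], []
    for row in rows:
        key = row[key_index]
        if key in seen:
            discarded.append(row)   # reject: duplicate key
        else:
            seen.add(key)
            loaded.append(row)
    return loaded, discarded
```

For example, loading `[(1, "a"), (2, "b"), (1, "dup")]` accepts the first two rows and diverts `(1, "dup")` to the discard list, leaving the transaction's worth of good rows intact.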
Jim
> Peter Eisentraut writes:
> > I think allowing this feature would open up a world of new
> > dangerous ideas, such as ignoring check constraints or foreign keys
> > or magically massaging other tables so that the foreign keys are
> > satisfied, or ignoring default values, or whatever. The next step
> > would then be allowing the same optimizations in INSERT. I feel
> > COPY should load the data and that's it. If you don't like the
> > data you have then you have to fix it first.
>
> I agree that PostgreSQL's checks during COPY are a bonus and I
> wouldn't dream of not having them. Many database systems provide a
> fast bulkload by ignoring these constraints and cross references -
> that's a tricky/horrid situation.
>
> However, I suppose the question is whether such 'invalid data' should
> abort the transaction; it seems a bit drastic...
>
> I suppose I'm not really after an IGNORE DUPLICATES option, but rather
> a CONTINUE ON ERROR kind of thing.
>
> Regards, Lee.
>
>