From: "Vadim Mikheev" <vmikheev(at)sectorbase(dot)com>
To: "Bruce Momjian" <pgman(at)candle(dot)pha(dot)pa(dot)us>, "Daniel Kalchev" <daniel(at)digsys(dot)bg>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Lee Kindness" <lkindness(at)csl(dot)co(dot)uk>, "Peter Eisentraut" <peter_e(at)gmx(dot)net>, "Jim Buttafuoco" <jim(at)buttafuoco(dot)net>, "PostgreSQL Development" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2002-01-04 07:47:36
Message-ID: 000001c194f4$37c84f50$ed2db841@home
Lists: pgsql-hackers
> Now, how about the same functionality for
>
> INSERT into table1 SELECT * from table2 ... WITH ERRORS;
>
> Should allow the insert to complete, even if table1 has unique indexes
> and we try to insert duplicate rows. Might save LOTS of time in
> bulkloading scripts not having to do single INSERTs.
1. I prefer the way Oracle (and others, I believe) does it: put the
statement(s) in a PL block and define what actions should be taken for
what exceptions (errors), e.g. IGNORE for a duplicate-key error.
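A minimal sketch of this Oracle-style approach, using the table names from the quoted example (hypothetical here) and Oracle's predefined DUP_VAL_ON_INDEX exception:

```sql
-- PL/SQL sketch: trap the unique-key violation and ignore it.
-- Table names table1/table2 are hypothetical.
BEGIN
    INSERT INTO table1 SELECT * FROM table2;
EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
        NULL;  -- ignore the duplicate-key error
END;
```

Note that in this form a single duplicate aborts the whole INSERT; to skip only the offending rows, one would loop over the source rows and catch the exception per row.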
2. For an INSERT ... SELECT statement, one can put DISTINCT in the
SELECT's target list.
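The DISTINCT variant, sketched against the same hypothetical tables:

```sql
-- DISTINCT removes duplicates within the source rows, so the insert
-- does not itself generate duplicate keys.
INSERT INTO table1 SELECT DISTINCT * FROM table2;
```

This only deduplicates rows coming from table2; rows already present in table1 would still raise a unique-index violation.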
> Guess all this will be available in 7.3?
We'll see.
Vadim