From: Michael Fuhr <mike(at)fuhr(dot)org>
To: Joolz <joolz(at)arbodienst-limburg(dot)nl>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: unique problem
Date: 2004-11-01 15:54:36
Message-ID: 20041101155436.GA18546@winnie.fuhr.org
Lists: pgsql-general
On Mon, Nov 01, 2004 at 04:13:43PM +0100, Joolz wrote:
>
> When importing a bunch of data (> 85000 rows) I get an error I can't
> explain. The table into which I'm importing has a unique constraint on
> (code, bedrijf). The rows in the source table are unique in this
> respect, yet when I do the import I get this error:
> ERROR: duplicate key violates unique constraint "werknemer_bedrijf_key"
How are you importing the data? If you use COPY, the error message
should show which line caused the problem; if you do individual
INSERTs, your import code should be able to catch the error on the
offending row. An INSERT ... SELECT, however, probably won't tell
you which record is the duplicate.
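You can also hunt for the duplicates inside the database. A query
along these lines should show every (code, bedrijf) pair that occurs
more than once ("source_table" is a placeholder; substitute the real
name of the table you're importing from):

SELECT code, bedrijf, count(*)
FROM source_table  -- placeholder; use your actual source table name
GROUP BY code, bedrijf
HAVING count(*) > 1;

If this returns no rows, then the source really is unique on those
columns and the conflict must be coming from somewhere else -- rows
already present in the target table, for instance.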
> I checked the source table a number of times, even COPYed the
> relevant columns to a text file and ran `uniq -d` and `uniq -D`
> (nothing non-unique found), and tried to delete the non-unique rows
> (again nothing found).
Did you sort the file before you ran uniq? Duplicate lines need
to be adjacent for uniq to recognize them.
% cat foo
abc
def
abc
% uniq -d foo
% sort foo | uniq -d
abc
--
Michael Fuhr
http://www.fuhr.org/~mfuhr/