From: Bruno Wolff III <bruno(at)wolff(dot)to>
To: papapep <papapep(at)gmx(dot)net>
Cc: pgsql-novice <pgsql-novice(at)postgresql(dot)org>
Subject: Re: [personal] Re: Filtering duplicated row with a trigger
Date: 2003-10-06 17:40:32
Message-ID: 20031006174032.GA28771@wolff.to
Lists: pgsql-novice
On Mon, Oct 06, 2003 at 19:28:29 +0200,
papapep <papapep(at)gmx(dot)net> wrote:
>
> I've got, on the other hand, text files prepared to be inserted in this
> table with the \copy command, but we are not sure (we've found
> duplicated rows several times) that there are not repeated rows.
>
> I'm trying to create a function that controls these duplicated rows to
> keep the table "clean" of them. In fact, I don't mind if the duplicated
> rows are inserted in a "duplicated rows" table (but perhaps it should be
> a good way to detect where they are generated) or if they get "missed in
> action".
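
One common way to handle this without a trigger is to \copy into a staging table first and then move only the non-duplicate rows across. A minimal sketch, assuming a hypothetical target table `main_table(id, val)` with primary key `id` (all table, column, and file names here are invented for illustration):

```sql
-- Load the text file into a staging table with the same layout
-- as the target, so duplicates never hit the primary key directly.
CREATE TEMP TABLE staging (id integer, val text);
\copy staging FROM 'data.txt'

-- Keep one copy of each id from the file, and skip ids that
-- already exist in the target table.
INSERT INTO main_table (id, val)
SELECT DISTINCT ON (id) id, val
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM main_table m WHERE m.id = s.id);

-- Optionally, the rows left behind can be inspected to see
-- where the duplicates came from.
```

Note that `DISTINCT ON` is PostgreSQL-specific and picks an arbitrary row per `id` unless you add an ORDER BY, which connects directly to the question below about which duplicate should survive.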
And what do you want to happen when you run across a duplicate row?
Do you just want to discard tuples with a duplicate primary key?
If you are discarding duplicates, do you care which of the duplicates
is discarded?
If you want to combine data from the duplicates, do you have a precise
description of what you want to happen?
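
If the answer is "discard, but keep the duplicates somewhere for inspection", the trigger approach the original poster mentioned could look roughly like this. This is only a sketch; `main_table`, `dup_rows`, and the columns `id`/`val` are hypothetical stand-ins for the real schema:

```sql
-- Side table that collects the rejected rows.
CREATE TABLE dup_rows (id integer, val text);

-- BEFORE INSERT trigger function: if the primary key already
-- exists, divert the row into dup_rows and suppress the insert
-- by returning NULL; otherwise let the row through unchanged.
CREATE FUNCTION divert_dups() RETURNS trigger AS '
BEGIN
    IF EXISTS (SELECT 1 FROM main_table WHERE id = NEW.id) THEN
        INSERT INTO dup_rows VALUES (NEW.id, NEW.val);
        RETURN NULL;  -- row is silently dropped from main_table
    END IF;
    RETURN NEW;
END;
' LANGUAGE plpgsql;

CREATE TRIGGER filter_dups BEFORE INSERT ON main_table
    FOR EACH ROW EXECUTE PROCEDURE divert_dups();
```

Row-level triggers fire for rows loaded with COPY as well as plain INSERTs, so this also filters a \copy load. One caveat: this scheme keeps whichever copy arrives first and diverts the rest, so it answers the "which duplicate is discarded" question by arrival order.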