From: | Bruno Wolff III <bruno(at)wolff(dot)to> |
---|---|
To: | papapep <papapep(at)gmx(dot)net> |
Cc: | pgsql-novice <pgsql-novice(at)postgresql(dot)org> |
Subject: | Re: [personal] Re: Filtering duplicated row with a trigger |
Date: | 2003-10-06 17:33:37 |
Message-ID: | 20031006173337.GA28578@wolff.to |
Lists: | pgsql-novice |
On Mon, Oct 06, 2003 at 18:56:28 +0200,
papapep <papapep(at)gmx(dot)net> wrote:
>
> I'm very sorry, but I think I don't completely understand what you mean.
> Perhaps you suggest inserting all the data into an initial temporary
> table and checking for duplicates in the temporary table before
> transferring the "good" rows to the real table? If so, how should I do
Yes. That allows you to use SQL to handle the duplicates, which is probably
going to be simpler than writing a trigger. You can also use COPY to
load the temp table, which will be faster than individual INSERTs.
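A minimal sketch of that approach. All names here are hypothetical: "staging" is the temp table, "target" is the real table, and k1 .. k5 stand in for the five primary key columns:

```sql
-- Create a temp table with the same columns as the real table.
CREATE TEMP TABLE staging AS SELECT * FROM target LIMIT 0;

-- Bulk-load the raw file; much faster than row-by-row INSERTs.
-- (Server-side COPY reads a file on the server; from psql you
-- could use \copy to read a client-side file instead.)
COPY staging FROM '/path/to/data.txt';

-- Keep one row per key value and move it into the real table.
-- Without an ORDER BY, which duplicate survives is arbitrary.
INSERT INTO target
SELECT DISTINCT ON (k1, k2, k3, k4, k5) *
FROM staging;
```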
> the duplicate control in the temp table? (for me it is as difficult as my
> first question :-( )
> Consider that the primary key we use to see if a row is duplicated,
> or not, is a 5-field key (it has to be so; it is complex data to
> filter).
You haven't given us a rule for deciding which tuples to remove
when a duplicate is detected. Without such a rule we can't give you
detailed instructions on how to remove the duplicates. Having a 5
column primary key doesn't make the problem significantly more difficult
to solve; it mostly just adds a small amount of typing.
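For illustration, suppose the rule were "keep the row with the newest load date". The same DISTINCT ON idea then just gains an ORDER BY; the extra typing from the 5-column key is only the repeated column list. The table and column names (staging, target, k1 .. k5, load_date) are hypothetical:

```sql
-- DISTINCT ON keeps the first row of each (k1..k5) group;
-- ordering load_date DESC within each group makes that first
-- row the newest one.
INSERT INTO target
SELECT DISTINCT ON (k1, k2, k3, k4, k5) *
FROM staging
ORDER BY k1, k2, k3, k4, k5, load_date DESC;
```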
From | Date | Subject | |
---|---|---|---|
Next Message | papapep | 2003-10-06 17:39:16 | Re: [personal] Re: Filtering duplicated row with a trigger |
Previous Message | papapep | 2003-10-06 17:28:29 | Re: [personal] Re: Filtering duplicated row with a trigger |