From: Sean Davis <sdavis2(at)mail(dot)nih(dot)gov>
To: Verena Ruff <lists(at)triosolutions(dot)at>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: error handling
Date: 2006-05-10 15:49:46
Message-ID: C08783DA.B475%sdavis2@mail.nih.gov
Lists: pgsql-novice
On 5/10/06 9:09 AM, "Verena Ruff" <lists(at)triosolutions(dot)at> wrote:
> Hi,
>
> Sean Davis wrote:
>> Probably the simplest way to do this is to load the data into a temporary
>> table without the unique constraint then use SQL to insert a "clean" version
>> into the new table. Alternatively, you could use savepoints. If an insert
>> fails, just rollback to that savepoint. If it succeeds, commit that
>> savepoint.
>>
> But isn't that more overhead than using the trigger? And while inserting
> into the clean table, I would have to test whether the value already
> exists or not.
If you are copying bulk data into the table only once, then cleaning the
data up front will not impact your actual use down the road. If you are
saying that you will be inserting non-unique values and need to catch that,
a trigger is the better way to go. As for testing whether the value already
exists, you can avoid that by using SQL features (such as the
PostgreSQL-specific DISTINCT ON clause) to select from the temporary table
only those values that are unique. See here in the docs:
http://www.postgresql.org/docs/8.1/interactive/sql-select.html#SQL-DISTINCT
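For illustration, the temporary-table approach might look like the sketch
below (the table and column names are hypothetical, and the data path is a
placeholder; DISTINCT ON keeps one row per value of the listed expression,
and the savepoint alternative is shown as comments in the same block):

```sql
-- Hypothetical target table with a unique constraint, and a
-- staging table without one.
CREATE TABLE clean_data (id integer PRIMARY KEY, val text);
CREATE TEMPORARY TABLE raw_data (id integer, val text);

-- Bulk-load into the unconstrained staging table first
-- (or use \copy from psql for a client-side file).
COPY raw_data FROM '/path/to/data.txt';

-- Then insert only one row per id, using the PostgreSQL-specific
-- DISTINCT ON clause; the ORDER BY must start with the
-- DISTINCT ON expression.
INSERT INTO clean_data (id, val)
SELECT DISTINCT ON (id) id, val
FROM raw_data
ORDER BY id;

-- Savepoint alternative, driven row by row from the application:
-- BEGIN;
-- SAVEPOINT before_insert;
-- INSERT INTO clean_data VALUES (1, 'a');
-- -- if the INSERT fails with a unique violation:
-- ROLLBACK TO SAVEPOINT before_insert;
-- -- then continue with the next row, and finally:
-- COMMIT;
```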
> I don't see the advantage of this compared to the script which was
> posted a few weeks ago.
I should have pointed out that the solution depends on your needs. If you
don't see an advantage, it is likely because there isn't one for your
particular needs, so feel free to use some other option.
Sean