From: Ivan Pavlov <ivan(dot)pavlov(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Logg errors during UPDATE
Date: 2008-12-16 15:44:48
Message-ID: b269add4-12f7-401e-be86-a6510848fe9b@g1g2000pra.googlegroups.com
Lists: pgsql-general
Neither LOG ERRORS nor REJECT LIMIT is implemented in PostgreSQL,
though I agree they could be useful. Both can be simulated with a custom
stored procedure that loops over a cursor and updates row by row,
trapping errors along the way. This will, of course, be slower.
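A sketch of that approach in PL/pgSQL (table and column names here are placeholders you would replace with your own; the error-log table is something you would define yourself):

```sql
-- Hypothetical sketch: update rows one at a time and log failures
-- into update_errors instead of aborting the whole statement.
-- big_table and update_errors are placeholder names.
CREATE OR REPLACE FUNCTION update_with_error_log() RETURNS void AS $$
DECLARE
    r RECORD;
BEGIN
    FOR r IN SELECT id FROM big_table LOOP
        BEGIN
            UPDATE big_table SET amount = amount * 1.1 WHERE id = r.id;
        EXCEPTION WHEN OTHERS THEN
            -- The BEGIN/EXCEPTION block runs in a subtransaction,
            -- so only this row's update is rolled back.
            INSERT INTO update_errors (row_id, error_msg, logged_at)
            VALUES (r.id, SQLERRM, now());
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

A REJECT LIMIT could be imitated by incrementing a counter in the exception handler and raising an error once it exceeds the threshold.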
regards,
Ivan Pavlov
On Dec 12, 4:34 am, spam_ea(dot)(dot)(dot)(at)gmx(dot)net (Thomas Kellerer) wrote:
> Hi,
>
> With Oracle I have the ability to tell the system to log errors during a long transaction into a separate table and proceed with the statement. This is quite handy when updating large tables and the update for one out of a million rows fails.
>
> The syntax is something like this:
>
> UPDATE <affecting a lot of rows>
> LOG ERRORS INTO target_log_table;
>
> Any row that cannot be updated will be logged into the specified table (which of course needs to have a specific format) and the statement continues. You can add a limit on how many errors should be "tolerated".
> This works for INSERT and DELETE as well.
>
> Is there something similar in Postgres? Or a way how I could simulate this?
>
> Cheers
> Thomas
>
> --
> Sent via pgsql-general mailing list (pgsql-gene(dot)(dot)(dot)(at)postgresql(dot)org)
> To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general