From: Thomas Hallgren <thhal(at)mailblocks(dot)com>
To: Steve Holdoway <steve(at)treshna(dot)com>
Subject: Re: transactions within functions
Date: 2004-07-02 11:14:39
Message-ID: 40E5439F.3030008@mailblocks.com
Lists: pgsql-hackers
Steve Holdoway wrote:
> Yes, I know it's not possible, but can anyone suggest an alternative for
> this problem?
>
> I've written a very simple trigger-driven replication system, which
> works in stages. First the trigger generates an entry in a log table
> which is a fully formatted sql command... insert into/delete from, etc.
> Secondly, this table is transferred to the receiving database, and
> cleared down. This all works fine.
>
> On the receiving end, there is a cron job that processes all of the
> commands in this table. However, this is written as a plpgsql function,
> so it's 'all or nothing'... i.e. any errors in the data mean that all
> successful updates preceding this error are rolled back. This makes
> finding and debugging the data errors extremely difficult, but, more
> importantly, stops the update process cold.
>
> I have tried calling a child function from the parent to perform the
> update in batches, but it still exhibits the same 'all or nothing'
> functionality.
>
> Can anyone suggest a way that I can get around this?
>
The easiest way is probably to let your cron job be a small client
program, using one of the available client interfaces, and call a
PL/pgSQL function from there, once for each batch, each call followed by
a commit. This approach will give you full control, both with respect to
transactions and logging/debugging.
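To illustrate, here is a minimal sketch of such a client-side driver in
Python. It uses the standard-library sqlite3 module purely as a stand-in
for a real PostgreSQL driver (e.g. psycopg2), and the table and column
names (replication_log, id, command) are assumptions, not the original
poster's schema. Each logged command runs in its own transaction, so a
bad command is rolled back and reported without disturbing the updates
before or after it:

```python
import sqlite3

def process_replication_log(conn):
    """Apply each queued SQL command in its own transaction, so one
    bad command cannot roll back the successful ones around it."""
    # Fetch the whole work list up front, then commit per command.
    rows = conn.execute(
        "SELECT id, command FROM replication_log ORDER BY id"
    ).fetchall()
    applied, failed = 0, 0
    for row_id, command in rows:
        try:
            conn.execute(command)
            # Clear the log entry in the same transaction as the command.
            conn.execute("DELETE FROM replication_log WHERE id = ?", (row_id,))
            conn.commit()      # commit this command on its own
            applied += 1
        except sqlite3.Error as exc:
            conn.rollback()    # only this one command is discarded
            print(f"command {row_id} failed: {exc}")
            failed += 1
    return applied, failed
```

With a real PostgreSQL connection the structure is identical: execute,
then commit or roll back. Committing once per batch of commands instead
of once per command trades finer-grained error isolation for fewer
round-trips; the failed entries stay in the log table, which also makes
the bad data easy to find afterwards.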
Kind regards,
Thomas Hallgren