From: Filipe Pina <filipe(dot)pina(at)impactzero(dot)pt>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Postgresql General <pgsql-general(at)postgresql(dot)org>
Subject: Re: database-level lockdown
Date: 2015-06-12 16:25:18
Message-ID: 271401C5-E8DD-4B27-8C27-7FB0DB9617C2@impactzero.pt
Lists: pgsql-general
Exactly, that’s why there’s a limit on the number of retries. On the last try I wanted something like a full lockdown, to make sure the transaction cannot fail with a serialization failure (if no other processes are touching the database, that can’t happen).
So if two transactions were retrying over and over, the first one to reach max_retries would activate that “global lock”, making the other one wait, and then the second one would also be able to commit successfully...
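Something like this rough sketch is what I have in mind, using a Postgres advisory lock as the “global lock” (assuming plain psycopg2; run_with_retries, work and LOCK_KEY are made-up names, and it only works if every writer goes through the same wrapper so they all hold the shared lock):

    import psycopg2
    from psycopg2 import errorcodes

    MAX_RETRIES = 5
    LOCK_KEY = 42  # made-up advisory-lock key, shared by every client

    # Run work(cursor) in its own transaction, retrying serialization
    # failures (SQLSTATE 40001) up to MAX_RETRIES times.
    def run_with_retries(dsn, work):
        for attempt in range(1, MAX_RETRIES + 1):
            conn = psycopg2.connect(dsn)
            try:
                with conn:  # commits on success, rolls back on exception
                    with conn.cursor() as cur:
                        if attempt == MAX_RETRIES:
                            # Last try: take the advisory lock exclusively.
                            # Waits for all shared holders to finish, then
                            # queues every other transaction behind us.
                            cur.execute("SELECT pg_advisory_xact_lock(%s)",
                                        (LOCK_KEY,))
                        else:
                            # Normal tries: shared mode, so transactions only
                            # block each other while a "lockdown" is active.
                            cur.execute("SELECT pg_advisory_xact_lock_shared(%s)",
                                        (LOCK_KEY,))
                        work(cur)
                return
            except psycopg2.OperationalError as e:
                if e.pgcode != errorcodes.SERIALIZATION_FAILURE:
                    raise
                if attempt == MAX_RETRIES:
                    raise
            finally:
                conn.close()

The lock is transaction-scoped (pg_advisory_xact_lock), so it’s released automatically on commit or rollback and a crashed client can’t leave the database locked.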
> On 11/06/2015, at 20:27, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>
> Filipe Pina <filipe(dot)pina(at)impactzero(dot)pt> writes:
>> It will try 5 times to execute each instruction (in case of
>> OperationalError) and in the last one it will raise the last error it
>> received, aborting.
>
>> Now my problem is that aborting for the last try (on a restartable
>> error - OperationalError code 40001) is not an option... It simply
>> needs to get through, locking whatever other processes and queries it
>> needs.
>
> I think you need to reconsider your objectives. What if two or more
> transactions are repeatedly failing and retrying, perhaps because they
> conflict? They can't all forcibly win.
>
> regards, tom lane