From: Ottavio Campana <ottavio(at)campana(dot)vi(dot)it>
To: pgsql-general(at)postgresql(dot)org
Subject: concurrency in stored procedures
Date: 2007-03-23 18:40:58
Message-ID: 46041F3A.2030707@campana.vi.it
Lists: pgsql-general
Hi,
By using constraints on tables I was able to remove some race conditions,
because a unique index prevents the same data from being inserted twice
into the table.
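For example, this is the kind of constraint I mean (the table is just an
invented example):

CREATE TABLE accounts (
    id       serial PRIMARY KEY,
    username text NOT NULL UNIQUE
);

INSERT INTO accounts (username) VALUES ('alice');
-- A second INSERT of the same username, even from a concurrent session,
-- fails with a unique_violation error instead of creating a duplicate row.
INSERT INTO accounts (username) VALUES ('alice');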
But I still haven't fixed all the race conditions, because in some
functions I have to modify more than one table, or I have to read and
write data in the same table (a simplified sketch of this pattern follows
below). So, what is the best way to handle concurrency in stored
procedures?
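A sketch of the read-and-write-the-same-table case, with invented table and
function names; two concurrent calls for a user that has no row yet can both
take the INSERT branch:

CREATE OR REPLACE FUNCTION add_points(p_user integer, p_points integer)
RETURNS void AS $$
BEGIN
    -- Read and write the same table: update the row if it exists,
    -- otherwise insert a new one.
    UPDATE scores SET points = points + p_points WHERE user_id = p_user;
    IF NOT FOUND THEN
        -- Race: another session may insert the same user_id right here.
        INSERT INTO scores (user_id, points) VALUES (p_user, p_points);
    END IF;
END;
$$ LANGUAGE plpgsql;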
I read that using locks isn't good because it may lead to deadlocks, so
I was thinking about transactions, but I wasn't able to find a good example.
What would you do in order to be sure that one function, or a part of it,
is executed atomically?
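For instance, would explicit row locking be the way to make the
read-modify-write atomic, or is that exactly the kind of lock that can lead
to deadlocks? A sketch, again with invented names:

CREATE OR REPLACE FUNCTION add_points_locked(p_user integer, p_points integer)
RETURNS void AS $$
DECLARE
    v_points integer;
BEGIN
    -- Lock the row first, so concurrent calls for the same user are
    -- serialized here (this only helps when the row already exists;
    -- the INSERT branch below can still race).
    SELECT points INTO v_points
      FROM scores WHERE user_id = p_user FOR UPDATE;
    IF FOUND THEN
        UPDATE scores SET points = v_points + p_points WHERE user_id = p_user;
    ELSE
        INSERT INTO scores (user_id, points) VALUES (p_user, p_points);
    END IF;
END;
$$ LANGUAGE plpgsql;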
I also read that PostgreSQL is able to detect deadlocks and resolve them
by aborting one of the transactions. How does this happen in a stored
procedure, and how can a procedure know that it was aborted because of a
deadlock?
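For example, can the function catch the error itself, along these lines?
(Just a sketch; the table and function are invented.)

CREATE OR REPLACE FUNCTION transfer(p_from integer, p_to integer, p_amount numeric)
RETURNS boolean AS $$
BEGIN
    UPDATE balances SET amount = amount - p_amount WHERE id = p_from;
    UPDATE balances SET amount = amount + p_amount WHERE id = p_to;
    RETURN true;
EXCEPTION
    WHEN deadlock_detected THEN
        -- The server chose this transaction as the deadlock victim;
        -- report it to the caller, which can decide to retry.
        RETURN false;
END;
$$ LANGUAGE plpgsql;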
Thank you
--
There is no more strength in normality, there is only monotony.