From: Bill Moran <wmoran(at)collaborativefusion(dot)com>
To: "antiochus antiochus" <antiochus(dot)usa(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: deadlock debug methodology question
Date: 2008-05-23 10:33:26
Message-ID: 20080523063326.e5da1014.wmoran@collaborativefusion.com
Lists: pgsql-general
In response to "antiochus antiochus" <antiochus(dot)usa(at)gmail(dot)com>:
> On Thu, May 22, 2008 at 4:20 PM, Bill Moran <wmoran(at)collaborativefusion(dot)com>
> wrote:
> >
> > In response to "antiochus antiochus" <antiochus(dot)usa(at)gmail(dot)com>:
> > >
> > > On Thu, May 22, 2008 at 2:57 PM, Bill Moran <
> > wmoran(at)collaborativefusion(dot)com>
> > > wrote:
> > >
> > > > In response to "antiochus antiochus" <antiochus(dot)usa(at)gmail(dot)com>:
> > > > >
> > > > > I have a deadlock situation, two transactions waiting on each other
> > to
> > > > > complete. Based on the details below, would anyone have
> > recommendations
> > > > for
> > > > > me, please?
> > > >
> > > > I have a theory on deadlocks, and that theory is that it's damn near
> > > > impossible to track them all down, so your best bet is to wrap all
> > > > SQL calls in a function that detects deadlock and sleep/retries.
[snip]
> For the past few days, I've been making the application work using your
> second suggestion (what I meant by "some form of explicit locking"). It
> does work.
>
> However, it is the first explicit lock used in this growing application and
> I was trying to not have any. The rows of this table are, from a business
> logic perspective, partitioned into distinct sets (think one set per
> customer) and a transaction would never involve rows across these sets. So
> in the original design, concurrency is great across customer requests.
> By using the table-level lock, writing transactions targeting these
> non-overlapping per-customer sets end up waiting on each other...
>
> If that is the best we can do, so be it. However, I do not understand why
> acquiring row-level locks in consistent order does not seem to be enough, or
> if that is so, why my methodology to enforce this ordering is flawed. Note
> that I also use foreign keys and I am aware of the fact that constraints are
> verified (and locks acquired) after row insertion; this is taken into
> account as well. I could have a "per-customer" lock, to improve concurrency
> across customers while avoiding deadlocks, or use serializable transactions,
> but I was wondering if a methodology to accomplish what I was originally
> trying to do has been documented anywhere.
I seem to remember us determining that you were _not_ acquiring row-level
locks in a consistent order. Did I miss something? AFAIK, if you can do
so in a consistent order, you will never hit deadlocks.
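To illustrate what "consistent order" means in practice, here is a minimal sketch in Python (the `orders` table, column names, and the `lock_rows` helper are my own assumptions, not code from this thread): every transaction sorts the ids it wants before locking them, so any two transactions touching overlapping rows always acquire the row locks in the same order and cannot wait on each other in a cycle.

```python
def lock_rows(cur, ids):
    """Lock a set of rows with SELECT ... FOR UPDATE, always in
    ascending id order. Because every transaction locks rows in
    this same global order, two transactions with overlapping row
    sets can block each other, but never deadlock."""
    for row_id in sorted(ids):  # the sort is the whole point
        cur.execute(
            "SELECT 1 FROM orders WHERE id = %s FOR UPDATE",
            (row_id,),
        )
```

Note this only helps if *every* code path that writes those rows goes through the same ordering discipline, including locks taken implicitly by foreign-key checks.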
Also, I stand by my original statement that deadlocks are nearly impossible
to 100% avoid and they're easy to recover from, so it makes sense to
program your application to detect them and retry appropriately.
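One possible shape for such a retry wrapper, sketched in Python (a sketch under assumptions, not code from this thread: `run_txn`, the backoff numbers, and the duck-typed `pgcode` attribute are illustrative; psycopg2 exposes the SQLSTATE that way, other drivers differ). PostgreSQL reports a detected deadlock with SQLSTATE 40P01, so the wrapper rolls back and retries on that code and re-raises everything else:

```python
import random
import time

DEADLOCK_SQLSTATE = "40P01"  # PostgreSQL "deadlock_detected"

def run_txn(conn, work, max_retries=5):
    """Run work(conn) as a transaction, retrying with a short
    randomized backoff whenever PostgreSQL aborts it as the
    deadlock victim. work must be safe to re-run from scratch."""
    for attempt in range(max_retries):
        try:
            result = work(conn)
            conn.commit()
            return result
        except Exception as exc:
            conn.rollback()
            # Only retry genuine deadlocks; propagate anything else.
            if getattr(exc, "pgcode", None) != DEADLOCK_SQLSTATE:
                raise
            time.sleep(random.uniform(0.05, 0.2) * (attempt + 1))
    raise RuntimeError("still deadlocking after %d tries" % max_retries)
```

The randomized, growing sleep matters: if both victims retry immediately on a fixed schedule, they can simply deadlock again.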
--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
wmoran(at)collaborativefusion(dot)com
Phone: 412-422-3463x4023