From: | Scott Marlowe <smarlowe(at)g2switchworks(dot)com> |
---|---|
To: | tom(at)tacocat(dot)net |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: DeadLocks... |
Date: | 2007-06-14 14:57:56 |
Message-ID: | 46715774.3060602@g2switchworks.com |
Lists: | pgsql-general |
tom(at)tacocat(dot)net wrote:
> I found a problem with my application which only occurs under high loads
> (isn't that always the case?).
>
> snippets of perl...
>
> insert into tokens (token)
> select values.token
> from (values TOKEN_LIST_STRING) as values(token)
> left outer join tokens t using (token)
> where t.token_idx is null
>
> $sql =~ s/TOKEN_LIST_STRING/$string/
> where $string is of the form (('one'),('two'))
>
> This works 99% of the time.
>
> But every once in a long while it seems that I hit simultaneous
> execute() statements that deadlock on the insertion.
>
> Right now I know of no other way to handle this than to eval{ } the
> execution and, if it fails, sleep for a random number of milliseconds and
> retry... "wash rinse repeat" for some number of times.
>
> Is there any better way of doing this or some other means to manage
> DEADLOCK?
Is this a deadlock that PostgreSQL detects and resolves by rolling back
one transaction, which you can then recover from, or are you talking
about a deadlock that PostgreSQL doesn't detect, one that leaves a
session blocked?
What error messages are you seeing?
Generally speaking, if your operations have the potential to deadlock,
the best you can do is what you're doing now: detect the failure, retry
x times, then give up if it won't go through.
Or, redesign the way you're doing things.
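The detect-and-retry approach described above can be sketched as follows. This is a minimal illustration in Python rather than the original Perl; the `DeadlockDetected` exception here is a stand-in for whatever error your driver raises (PostgreSQL reports deadlocks with SQLSTATE 40P01), and the demo operation is hypothetical:

```python
import random
import time

class DeadlockDetected(Exception):
    """Stand-in for the driver's deadlock error (SQLSTATE 40P01)."""

def retry_on_deadlock(operation, max_attempts=5, base_delay=0.01):
    """Run operation(); on deadlock, sleep a random interval and retry.

    Gives up (re-raises) after max_attempts failures, mirroring the
    "retry x times, then give up" advice above.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except DeadlockDetected:
            if attempt == max_attempts:
                raise  # it won't go through; give up
            # sleep random milliseconds, scaled up with each attempt
            time.sleep(random.uniform(0, base_delay * attempt))

# Demo: a hypothetical insert that deadlocks twice, then succeeds.
attempts = {"n": 0}
def insert_tokens():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise DeadlockDetected("deadlock detected")
    return "inserted"

print(retry_on_deadlock(insert_tokens))
```

The random sleep matters: if two colliding sessions retry after identical fixed delays, they can keep colliding on every attempt.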