From: Joe Uhl <joeuhl(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: Nonexistent pid in pg_locks
Date: 2009-07-08 18:56:29
Message-ID: 2E0DB33E-5553-4469-A30A-B2F1EB65B233@gmail.com
Lists: pgsql-bugs
On Jul 8, 2009, at 2:41 PM, Tom Lane wrote:
> Joe Uhl <joeuhl(at)gmail(dot)com> writes:
>> I had to bounce an OpenMQ broker this morning (this database is the
>> DB for an OpenMQ HA setup) and couldn't get it to reconnect to
>> postgres. On inspecting the database I found dozens of vacuum
>> processes waiting (I have a cron job that vacuums each night) and
>> chewing up connection slots. Killing those left a few autovacuum
>> worker processes waiting. Killing those left just this one orphaned
>> pid apparently holding a lock. Presumably they were all waiting on
>> the lock "held" by 10453.
>
> What exactly did you do to "kill" those processes? Do you remember
> whether any of them happened to have PID 10453?
I used "kill pid1 pid2 pid3 ..." (no -9) as root. Unfortunately I do
not recall whether that pid was one of the processes I killed, and
there is not enough scrollback in this screen to check. It is a
ShareUpdateExclusiveLock though, and I definitely only killed
vacuum/analyze pids, so I think there is a very high chance that 10453
was one of them.
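For reference, a sketch of the kind of query that can surface an orphaned lock entry like this one, by looking for lock holders with no matching live backend. This is an illustration, not something from the thread; it assumes a PostgreSQL version where pg_stat_activity exposes a pid column (on the 8.x releases current when this thread was written, the column was named procpid):

```sql
-- Hedged sketch: list pg_locks entries whose pid no longer corresponds
-- to a live backend in pg_stat_activity. On PostgreSQL 9.1 and earlier,
-- replace a.pid with a.procpid.
SELECT l.locktype,
       l.relation::regclass AS relation,
       l.pid,
       l.mode,
       l.granted
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.pid = l.pid
WHERE a.pid IS NULL;
```

As a side note, signaling a backend from SQL with `SELECT pg_cancel_backend(pid);` as a superuser is generally safer than running `kill` from the shell, and `kill -9` should always be avoided because it forces a crash recovery of all backends.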
>
>> Is there any way for me to clear that orphaned entry out of pg_locks?
>
> Restarting the database should take care of this, I think.
>
> regards, tom lane
I've got a block of time scheduled tonight for a restart; I'll give
that a shot. Thanks for the response,
Joe