From: Greg Stark <stark(at)mit(dot)edu>
To: Magnus Hagander <magnus(at)hagander(dot)net>
Cc: Thom Brown <thom(at)linux(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Protecting against multiple instances per cluster
Date: 2011-09-09 22:48:17
Message-ID: CAM-w4HNG-fV6aDO6wC9h9vLTjV-UZdohrLOMpfSVOSTGnxO8+g@mail.gmail.com
Lists: pgsql-hackers
On Thu, Sep 8, 2011 at 10:03 PM, Magnus Hagander <magnus(at)hagander(dot)net> wrote:
>> Would there be a way to prevent this abhorrent scenario from coming
>> into existence?
> There are plenty of clustering products out there that are really
> designed for one thing primarily, and that's dealing with this kind of
> fencing.
Wouldn't those products exist to *allow* you to set up an environment
like this safely?
I think what Thom is saying is that it would be nice if we could
notice that the situation looks bad and *stop* the user from doing
this at all.
We could do that easily if we were willing to trade away some
convenience for users who don't have shared storage, simply by
removing the code that decides whether a lock file is stale.
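For context, a minimal sketch of the kind of stale-lock check I mean
(not the actual postmaster code; the function name is hypothetical):
read the PID recorded in the lock file and probe it with kill(pid, 0).
The trouble on shared storage is that a PID that looks dead on this
host can be a live postmaster on the other host mounting the same
data directory.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <sys/types.h>

static int
lockfile_looks_stale(const char *path)
{
    FILE *fp = fopen(path, "r");
    long  pid;

    if (fp == NULL)
        return 1;               /* no lock file at all */

    if (fscanf(fp, "%ld", &pid) != 1)
    {
        fclose(fp);
        return 0;               /* unreadable: play it safe */
    }
    fclose(fp);

    /* Signal 0 checks existence/permission only; nothing is delivered. */
    if (kill((pid_t) pid, 0) == 0 || errno == EPERM)
        return 0;               /* some process with that PID exists locally */

    return 1;                   /* ESRCH: no such process -- on *this* host */
}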
Also, if the shared filesystem happened to have a working lock server
and we used the right file-locking API, we would be able to notice
that an apparently stale lock file is nonetheless locked by another
postgres instance. There was some talk about using one of those
locking APIs a while back.
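To illustrate that idea, here is a hedged sketch (again with a
hypothetical name, not an existing PostgreSQL routine) of taking a
POSIX advisory lock with fcntl(F_SETLK). With a working lock server
(e.g. NFS lockd) the lock is arbitrated by the file server, so a
second postmaster on another host would get EACCES/EAGAIN here even
though the PID check above passes.

#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <string.h>
#include <unistd.h>

static bool
try_lock_datadir_file(const char *path)
{
    struct flock fl;
    int          fd = open(path, O_RDWR | O_CREAT, 0600);

    if (fd < 0)
        return false;

    memset(&fl, 0, sizeof(fl));
    fl.l_type = F_WRLCK;        /* exclusive (write) lock */
    fl.l_whence = SEEK_SET;
    fl.l_start = 0;
    fl.l_len = 0;               /* 0 means "lock the whole file" */

    if (fcntl(fd, F_SETLK, &fl) < 0)
    {
        /* EACCES or EAGAIN: held by someone else, possibly on another host */
        close(fd);
        return false;
    }

    /* Keep fd open for the life of the process; closing it drops the lock. */
    return true;
}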
--
greg