From: "Dave Page" <dpage(at)vale-housing(dot)co(dot)uk>
To: <elein(at)varlena(dot)com>, <dpage(at)vale-housing(dot)co(dot)uk>
Cc: <pgadmin(at)pse-consulting(dot)de>, <pgadmin-hackers(at)postgresql(dot)org>, <slony1-general(at)gborg(dot)postgresql(dot)org>
Subject: Re: [Slony1-general] RE: Ready for beta yet?
Date: 2005-10-03 18:56:23
Message-ID: 024e01c5c84c$2678d522$6a01a8c0@valehousing.co.uk
Lists: pgadmin-hackers
-----Original Message-----
From: "elein"<elein(at)varlena(dot)com>
Sent: 03/10/05 19:18:55
To: "Dave Page"<dpage(at)vale-housing(dot)co(dot)uk>
Cc: "Andreas Pflug"<pgadmin(at)pse-consulting(dot)de>, "pgadmin-hackers(at)postgresql(dot)org"<pgadmin-hackers(at)postgresql(dot)org>, "Slony Mailing List"<slony1-general(at)gborg(dot)postgresql(dot)org>
Subject: Re: [Slony1-general] RE: [pgadmin-hackers] Ready for beta yet?
> The other reason for not using an origin is that there may be more than
> one table set in the cluster originating on different nodes. No?
Yes, but I don't think that's a problem as (in pgAdmin's case at least) we can just connect to the origin of each individual set, regardless of the topology.
Regards, Dave
-----Unmodified Original Message-----
On Mon, Oct 03, 2005 at 12:39:27PM +0100, Dave Page wrote:
>
>
> > -----Original Message-----
> > From: Andreas Pflug [mailto:pgadmin(at)pse-consulting(dot)de]
> > Sent: 03 October 2005 11:50
> > To: Dave Page
> > Cc: pgadmin-hackers(at)postgresql(dot)org; Slony Mailing List;
> > Christopher Kings-Lynne
> > Subject: Re: [pgadmin-hackers] Ready for beta yet?
> >
> > Dave Page wrote:
> >
> > >>We already discussed including the creation scripts in pgAdmin
> > >>installations to make the administrator's life easier, but
> > >>apparently we *must* do that to have pgAdmin working on Slony 1.1.
> > >
> > >
> > > No, that is a very *bad* idea.
> >
> > At least my proposal provoked an answer finally...
>
> I didn't comment previously because I don't understand the issues fully
> enough to provide any useful input. I do know that forking parts of
> Slony is probably not a good idea.
>
> > > The Slony team have told us that they
> > > don't think the proposed changes are appropriate,
> >
> > I wrote a lengthy mail about this timezone stuff. The issue is fixed
> > in slonik, but not in the db. Don't they want other tools to run? Why
> > do they insist on relying on the server's display formatting
> > settings, if things can easily be coded to avoid that?
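[As an illustrative aside: "coding to avoid that" could be as simple as the client pinning the session's output format before reading event timestamps, rather than depending on whatever DateStyle the server happens to use. A hedged sketch only; `_mycluster` stands in for the real Slony cluster schema name:]

```sql
-- Pin the session to an unambiguous timestamp format, regardless of
-- what DateStyle the server or user environment has configured.
SET DateStyle TO 'ISO';

-- Event timestamps now come back in ISO 8601 form, e.g.
-- '2005-10-03 18:56:23+01', which any client tool can parse reliably.
SELECT ev_origin, ev_seqno, ev_timestamp
  FROM _mycluster.sl_event;
```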
>
> I don't know. Perhaps Jan or Christopher can explain so we can
> understand their line of thinking?
>
> > > so all that will end up happening is that any users running pgAdmin
> > > will get told to come and see us for support, or stop running a
> > > non-standard version of Slony.
> >
> >
> > I did hope to get feedback on what exactly they think happens to
> > non-enabled nodes, or what the no_active flag is good for if a node
> > with the flag set to false is assumed to screw up everything.
> >
> > Imagine a standard node (freshly created with slonik, no_active=true)
> > that has no path information to other nodes. Will it interfere? If
> > not, will the existence (not the creation) of path information
> > interfere? AFAICS the interference starts when creating listens, not
> > earlier.
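[For reference, the configuration under discussion lives in the cluster's catalog tables. A hedged sketch of how one might check whether a node has any paths or listens yet, with `_mycluster` again standing in for the actual cluster schema name:]

```sql
-- Paths tell a node how to reach its peers; listens drive event
-- propagation. If sl_listen is empty, no listening has been set up yet.
SELECT pa_server, pa_client, pa_conninfo FROM _mycluster.sl_path;
SELECT li_origin, li_provider, li_receiver FROM _mycluster.sl_listen;
```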
>
> I really don't know the answer to that - again, hopefully Jan or
> Christopher can help.
>
> > > do not want us to have to deal with any Slony support beyond our
> > > own code in pgAdmin
> >
> > We'll be dealing with it anyway. Several tasks are implemented deep
> > down in slonik, and not really documented for execution by a
> > different tool.
>
> Yes, but I don't want us to have to take on random problems that might
> be caused by differences between our versions of the scripts. At the
> moment at least, pgAdmin is working within the documented Slony API. If
> we change that in any way, the Slony team would have every right to
> tell us and any users using our version of things to go and sort out
> *any* problems for ourselves.
>
> > > What exactly do we need the admin nodes for? I'm assuming that
> > > you're setting up the maintenance DB for the server as the admin
> > > node so we can easily find all sets on that server - is that right?
> >
> > It's not needed to locate stuff in the current db's cluster, but to
> > perform actions that have to be executed on the local db as well as
> > the remote server.
>
> Why are such actions not performed on the origin node as a matter of
> course (see below before answering that :-) )?
>
> > > Perhaps for older versions (well, 1.1) we need to just find sets as
> > > we connect to any individual databases, and use your original plan
> > > for 1.2, in which we use a separate table as Jan/Chris have
> > > suggested?
> >
> > While I'm not particularly fond of doing things differently in 1.2+
> > from 1.0-1.1, this is certainly not a real problem.
> > What *is* a problem is scanning all servers and databases to find a
> > suitable connection; this won't work for several reasons:
> > - we might hit a cluster with the same name which is actually a
> > different one.
> > - it may take a very long time. I have several remote servers
> > registered which aren't accessible until I connect via VPN. It would
> > take some minutes until all the timeouts elapsed.
>
> I'm not suggesting a full search at connect, just that we populate the
> trees etc only when we do actually connect to an individual database.
> That should be a relatively trivial query on pg_class/pg_namespace I
> would think?
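[Such a connect-time check might look like the following, assuming a schema counts as a Slony cluster when it contains an `sl_node` table; a sketch only, not pgAdmin's actual query:]

```sql
-- Find candidate Slony cluster schemas in the current database.
SELECT n.nspname AS cluster_schema
  FROM pg_catalog.pg_namespace n
  JOIN pg_catalog.pg_class c ON c.relnamespace = n.oid
 WHERE c.relname = 'sl_node'
   AND c.relkind = 'r';   -- ordinary table
```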
>
> > > Also, what has Chris K-L done in phpPgAdmin? I thought he was going
> > > to use the admin nodes idea as well, but perhaps not...
> >
> > He might not have got to the point where he needs multiple
> > connections. Joining a cluster needs them, but this is trivial
> > because the server/db/cluster-to-join is manually selected. Failover
> > does require them (failednode has to be called on all nodes).
> >
> > I'd consider it highly error-prone if the entry of connection
> > information is deferred until failover really has to happen. Failover
> > is a last-resort action, after everything else has failed. In that
> > situation human errors tend to happen, and they usually have far more
> > dramatic consequences (Murphy...).
> >
> > The second use-case is monitoring. When working on a cluster, it's
> > desirable to work cluster-centric, i.e. do everything from a single
> > point, instead of jumping from one database to another to locate the
> > provider node or to find out what's happening on a particular node.
> >
> > While omitting the second point simply makes things (a lot) less
> > user-friendly, failover support can't be reliably implemented without
> > persistent connection info.
>
> Which is why you can't use the origin I guess - because you are unlikely
> to be able to access the origin when you need to failover, but need to
> be sure that pgAdmin knows about the most recent configuration before
> doing anything potentially dangerous. Hmmm... Think I see what you mean
> now (at last)!!
The other reason for not using an origin is that there may be more than
one table set in the cluster originating on different nodes. No?
--elein
>
> /D
> _______________________________________________
> Slony1-general mailing list
> Slony1-general(at)gborg(dot)postgresql(dot)org
> http://gborg.postgresql.org/mailman/listinfo/slony1-general
>