From: "Brendan Jurd" <direvus(at)gmail(dot)com>
To: "Chris Browne" <cbbrowne(at)acm(dot)org>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Request for replication advice
Date: 2006-11-10 21:56:21
Message-ID: 37ed240d0611101356p619289b3r239ccf5a27fca349@mail.gmail.com
Lists: pgsql-general

On 11/11/06, Chris Browne <cbbrowne(at)acm(dot)org> wrote:
> Let me point out one possible downside to using Slony-I log shipping;
> it may not be an issue for you, but it's worth observing...
>
> Log shipping works via serializing the subscription work done on a
> subscriber to files. Thus, you MUST have at least one subscriber in
> order to have log shipping work. If that's a problem, that's a
> problem...
So I would have a normal Slony subscriber sitting somewhere on the
internal network, which pushes its log files out to the remote server.
And the remote server then has a process sitting on it which inhales
the log files into the database as they arrive.
Have I got the right idea?
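Something like this is what I have in mind for the remote side (a rough
sketch only; the paths and file names are made up, and `cat` stands in for
`psql` so the sketch runs without a Slony installation):

```shell
# Sketch of the remote-side "inhale" loop for Slony-I log shipping: the
# subscriber-side slon writes sequence-numbered SQL files into an archive
# directory, and the remote end applies them in order.  Everything below
# is illustrative, not Slony's actual tooling.
ARCHIVE=/tmp/slony_archive_demo
mkdir -p "$ARCHIVE"

# Stand-ins for files the subscriber-side slon would have produced:
printf 'INSERT INTO t VALUES (1);\n' > "$ARCHIVE/slony_log_1_00000000000000000001.sql"
printf 'INSERT INTO t VALUES (2);\n' > "$ARCHIVE/slony_log_1_00000000000000000002.sql"

# Zero-padded sequence numbers mean lexical glob order == apply order.
for f in "$ARCHIVE"/slony_log_*.sql
do
    echo "applying $f"
    cat "$f"    # in real use: psql -d replica -f "$f" && mv "$f" "$f.done"
done
```

The files would presumably arrive via whatever one-way transport suits the
network (rsync over ssh, scp from cron, etc.).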
Why *does* Slony require a bi-directional connection to the
subscriber? The data is travelling in one direction only ... what
needs to come back the other way?
This seems to be getting rather messy. I wonder if I might not be
better off just writing AFTER triggers on all the tables I'm
interested in, which replicate the query to the slave system with
psql. It would probably be relatively labour intensive, and increase
the burden of administering the schema, but it would also be a much
more direct and simple approach.
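For the trigger idea, I'm picturing something like this (a rough sketch;
the table, function and trigger names are all made up, and a separate job
would drain the queue to the slave with psql):

```sql
-- Capture changes to a table of interest into a queue on the master,
-- to be replayed on the slave later.
CREATE TABLE replication_queue (
    id      serial PRIMARY KEY,
    query   text NOT NULL
);

CREATE OR REPLACE FUNCTION queue_accounts_insert() RETURNS trigger AS $$
BEGIN
    -- quote_literal() guards against quoting problems in the data
    INSERT INTO replication_queue (query)
    VALUES ('INSERT INTO accounts (id, name) VALUES ('
            || NEW.id || ', ' || quote_literal(NEW.name) || ')');
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_replicate
    AFTER INSERT ON accounts
    FOR EACH ROW EXECUTE PROCEDURE queue_accounts_insert();
```

Multiply that by every table and every operation (UPDATE, DELETE) and the
administrative burden I mentioned becomes clear.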
BJ