From: Craig James <craig_james(at)emolecules(dot)com>
To: Ben <bench(at)silentmedia(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Replication
Date: 2007-06-15 00:38:01
Message-ID: 4671DF69.9090504@emolecules.com
Lists: pgsql-performance
Thanks to all who replied and filled in the blanks. The problem with the web is you never know if you've missed something.
Joshua D. Drake wrote:
>> Looking for replication solutions, I find...
>> Slony-II
> Dead
Wow, I'm surprised. Is it dead for lack of need, lack of resources, too complex, or all of the above? It sounded like such a promising theoretical foundation.
Ben wrote:
> Which replication problem are you trying to solve?
Most of our data is replicated offline using custom tools tailored to our loading pattern, but we have a small amount of "global" information, such as user signups, system configuration, and advertisements, that goes into a single small (~5-10 MB) "global database" used by all servers.
We need "nearly-real-time replication," and instant failover. That is, it's far more important for the system to keep working than it is to lose a little data. Transactional integrity is not important. Actual hardware failures are rare, and if a user just happens to sign up, or do "save preferences", at the instant the global-database server goes down, it's not a tragedy. But it's not OK for the entire web site to go down when the one global-database server fails.
Slony-I can keep several slave databases up to date, which is nice. And I think I can combine it with a PGPool instance on each server, with the master as primary and a few Slony copies as secondaries. That way, if the master goes down, the PGPool servers all switch to their secondary Slony slaves, and read-only access can continue. If the master crashes, users will still be able to do most activities, but new users can't sign up, and existing users can't change their preferences, until either the master server comes back or one of the slaves is promoted to master.
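In application terms, the fallback I have in mind looks roughly like this (a sketch only, not what PGPool itself does; the host names and credentials are placeholders, and it assumes psycopg2 on the web servers): prefer the master for read-write work, and drop to a read-only Slony replica when the master is unreachable.

import psycopg2

# Placeholder hosts/credentials for the small "global" database.
MASTER = {"host": "global-master", "port": 5432, "dbname": "global", "user": "webapp"}
REPLICAS = [
    {"host": "global-replica1", "port": 5432, "dbname": "global", "user": "webapp"},
    {"host": "global-replica2", "port": 5432, "dbname": "global", "user": "webapp"},
]

def connect_global(timeout=2):
    """Return (connection, writable) for the global database."""
    try:
        # Master reachable: reads and writes (signups, preferences) both work.
        return psycopg2.connect(connect_timeout=timeout, **MASTER), True
    except psycopg2.OperationalError:
        pass  # master down: degrade to read-only service
    for replica in REPLICAS:
        try:
            conn = psycopg2.connect(connect_timeout=timeout, **replica)
            # Slony-I replicas reject writes to replicated tables anyway, but an
            # explicitly read-only session makes the degraded mode obvious.
            cur = conn.cursor()
            cur.execute("SET default_transaction_read_only = on")
            conn.commit()
            return conn, False
        except psycopg2.OperationalError:
            continue
    raise RuntimeError("no global-database node is reachable")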
The problem is, there don't seem to be any "vote a new master" type of tools for Slony-I, and also, if the original master comes back online, it has no way to know that a new master has been elected. So I'd have to write a bunch of SOAP services or something to do all of this.
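Something along these lines is roughly what I'd have to build (a sketch, not working code: the cluster name, node IDs, and conninfo strings are made up, it assumes psycopg2 plus the slonik command-line tool, and it ignores the fencing needed to keep the old master from coming back as origin):

import subprocess
import time

import psycopg2

SLONIK_PREAMBLE = """\
cluster name = global_cluster;
node 1 admin conninfo = 'host=global-master dbname=global user=slony';
node 2 admin conninfo = 'host=global-replica1 dbname=global user=slony';
"""

# Promote node 2 to origin of set 1 after node 1 has failed.
FAILOVER_SCRIPT = SLONIK_PREAMBLE + "failover (id = 1, backup node = 2);\n"

def master_alive():
    try:
        conn = psycopg2.connect(host="global-master", dbname="global",
                                user="slony", connect_timeout=2)
        conn.cursor().execute("SELECT 1")
        conn.close()
        return True
    except psycopg2.OperationalError:
        return False

def watch(poll_seconds=5, max_failures=3):
    failures = 0
    while True:
        failures = 0 if master_alive() else failures + 1
        if failures >= max_failures:
            # slonik reads its script from stdin when no file is given.
            subprocess.run(["slonik"], input=FAILOVER_SCRIPT, text=True, check=True)
            break
        time.sleep(poll_seconds)

The part that's still missing is the election itself, plus telling each PGPool instance (or the app-side fallback above) which node is the new origin, which is exactly the coordination layer I'd rather not write by hand.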
I would consider PGCluster, but it seems to be a patch to Postgres itself. I'm reluctant to introduce such a major piece of technology into our entire system, when only one tiny part of it needs the replication service.
Thanks,
Craig