From: Andrew Rawnsley <ronz(at)investoranalytics(dot)com>
To: <pgsql-general(at)postgresql(dot)org>
Subject: Re: Replication
Date: 2005-09-14 13:47:02
Message-ID: BF4DA216.79B3%ronz@investoranalytics.com
Lists: pgsql-general
On 9/13/05 2:45 PM, "Scott Marlowe" <smarlowe(at)g2switchworks(dot)com> wrote:
> On Tue, 2005-09-13 at 10:45, Russ Brown wrote:
>> Simon Riggs wrote:
>>> Barry,
>>>
>>> You can use PITR to archive transaction logs to a second server that is
>>> kept in standby mode.
>>>
>>> This will cope with any number of tables and cope with dynamic changes
>>> to tables.
>>>
>>> This is fairly straightforward and very low overhead.
>>> Set archive_command to a program that transfers xlog files to second
>>> server.
>>> Then set restore_command on the second server to a program that loops
>>> until the next file is available.
>>> Switchover time is low.
>>>
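
(For anyone digging this out of the archives later: the two settings Simon mentions just point at small programs you write yourself. Here is a rough sketch of their shape in Python, not a drop-in setup. The script name, staging directory, and config lines are invented for illustration, the real transfer between machines (rsync, scp, NFS) and proper error handling are left out, and you'd adapt all of it to your environment.)

```python
#!/usr/bin/env python
# wal_ship.py -- illustrative only; names, paths, and config lines are invented.
# The primary would call this from archive_command, the standby from
# restore_command, e.g.:
#   archive_command = 'python /usr/local/bin/wal_ship.py archive %p %f'
#   restore_command = 'python /usr/local/bin/wal_ship.py restore %f %p'
# This assumes a shared staging directory just to show the shape of the two
# commands; a real setup would push the files to the second server itself.
import os
import shutil
import sys
import time

STAGING = "/var/lib/pgsql/wal_staging"   # hypothetical shared/NFS directory

def archive(src_path, file_name):
    """Copy one finished WAL segment into staging; report success only if the
    copy is complete.  A non-zero exit makes PostgreSQL retry the segment."""
    dest = os.path.join(STAGING, file_name)
    tmp = dest + ".part"
    shutil.copy2(src_path, tmp)
    if os.path.getsize(tmp) != os.path.getsize(src_path):
        os.unlink(tmp)
        return 1
    os.rename(tmp, dest)   # atomic rename: the standby never sees a partial file
    return 0

def restore(file_name, dest_path):
    """Loop until the requested segment shows up, then hand it to recovery.
    This is the 'program that loops until the next file is available'."""
    src = os.path.join(STAGING, file_name)
    while not os.path.exists(src):
        time.sleep(5)
    shutil.copy2(src, dest_path)
    return 0

if __name__ == "__main__":
    if sys.argv[1] == "archive":
        sys.exit(archive(sys.argv[2], sys.argv[3]))
    else:
        sys.exit(restore(sys.argv[2], sys.argv[3]))
```

archive_command lives in postgresql.conf on the primary and restore_command in recovery.conf on the standby; both are ordinary shell commands, which is why the "couple of scripts" Russ asks about below really is all the glue there is.
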
>>
>> Apologies for going slightly off topic, but isn't this basically how
>> MySQL does replication? I ask because one of the arguments against
>> moving to PostgreSQL in my organisation is 'no replication out of the
>> box'. But if the above is true it seems that all that is required are a
>> couple of scripts to handle log transfer and you have a form of
>> replication out of the box right there.
>>
>> Or am I missing something?
>
> I don't know, but someone in your organization seems to be.
>
> Let me present it as a simple devil's choice: which would you rather
> have, replication that is tested and proven to work, but requires you
> to set up a secondary bit of software / system scripts (like rsync),
> or an "out of the box" solution that is untested, unreliable, and
> possibly unsafe for your data?
>
When I was putting together a fairly complex log-shipping solution in Oracle
(sorry for the O word...), I was presented with that exact choice: use
Oracle's built-in log shipping/recovery mechanism, or design an 'in rsync we
trust' system of scripts. I chose the scripts, and it's worked without a burp
for a looong time now. Easy to test, easy to debug, predictable, small
simple parts. It's really not that hard. Keep track of disk space, and make
sure to check the size of the destination file when you move something
around, not just its existence. Not much else to it.
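
To make that last bit concrete, the checks I'm talking about are only a few lines each. A rough sketch (the function names and the free-space floor are made up, and you'd wire them into however you actually move the files):

```python
# Sanity checks for a script-based log-shipping setup: verify a transferred
# file by size rather than mere existence, and watch free disk space before
# copying anything.  Illustrative only; the 10% floor is not a recommendation.
import os

def copy_looks_complete(src, dest):
    """True only if dest exists and its size matches src exactly."""
    return os.path.exists(dest) and os.path.getsize(dest) == os.path.getsize(src)

def enough_disk_space(directory, min_free_fraction=0.10):
    """True if at least min_free_fraction of the filesystem is still free (Unix)."""
    st = os.statvfs(directory)
    return float(st.f_bavail) / st.f_blocks >= min_free_fraction
```

A size match won't catch corruption the way a checksum would, but it does catch the common failure of a copy that died halfway through, which is the one that bites you.
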
> Choosing a database because it has "out of the box" replication, without
> paying attention to how it is implemented, how well it works, and the
> ways it can break, is a recipe for (data) disaster.
>
We're getting back to the oft-repeated mantra here - replication is hard.
Anyone saying it can be effortless doesn't understand the complexity of the
problem.
> I've tested slony, and I know that for what we use it for, it's a good
> fit and it works well. I've tested MySQL's replication, and it simply
> can't do what I need from a replication system. It can't be set up on
> the fly on a live system with no downtime, and it has reliability
> issues that make it a poor choice for a 24/7 enterprise replication
> system.
>
> That said, it's a great system for content management replication, where
> downtime is fine while setting up replication.
>
> But I wouldn't choose either because it was easier to implement. Being
> easy to implement is just sauce on the turkey. I need the meat to be
> good or the sauce doesn't matter.
>