From: Gordan Bobic <gordan(at)bobich(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Replication Using Triggers
Date: 2008-01-18 21:53:23
Message-ID: 47911FD3.4090704@bobich.net
Lists: pgsql-general
Andreas 'ads' Scherbaum wrote:
>>>>>> Have a plperl function that creates connections to all servers in the
>>>>>> cluster (replication partners), and issues the supplied write query to
>>>>>> them, possibly with a tag of some sort to indicate it is a replicated
>>>>>> query (to prevent circular replication).
>>>>>>
>>>>>> Have a post-execute trigger that calls the above replication function if
>>>>>> the query was issued directly (as opposed to replicated), and passes it
>>>>>> the query it just executed if it was successful.
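A minimal sketch of such a forwarding function, assuming plperlu (DBI needs an untrusted language) and a made-up partner DSN and credentials:

```sql
-- Sketch only: forward a statement to each replication partner.
-- The DSN, credentials and the leading-comment tag are assumptions.
CREATE OR REPLACE FUNCTION replicate_stmt(stmt text) RETURNS void AS $$
    use DBI;
    my ($stmt) = @_;
    # In a real setup the partner list would come from a config table.
    for my $dsn ('dbi:Pg:host=partner1;dbname=app') {
        my $dbh = DBI->connect($dsn, 'repl', 'secret', { RaiseError => 1 });
        # The leading comment tags the statement as replicated, so the
        # partner's trigger can skip it and avoid circular replication.
        $dbh->do("/* replicated */ $stmt");
        $dbh->disconnect;
    }
$$ LANGUAGE plperlu;
```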
>>> Not sure here if you mean literally the SQL query that was executed - in
>>> which case you have all sorts of problems with sequences and functions
>>> returning different values.
>> Indeed, but sequences at least can be worked around. Post-execute, the
>> sequence number used should already be available, and the sequence offset
>> and increment can be set to ensure they do not clash. That's what
>> MySQL does (and I must apologize for making the comparison all the time).
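For the record, the offset/increment trick needs nothing more than plain ALTER SEQUENCE; e.g., for a hypothetical two-node cluster (sequence name made up):

```sql
-- Interleave the sequence streams so locally generated ids never
-- collide (mirrors MySQL's auto_increment_increment/offset).
-- On node 1:
ALTER SEQUENCE mytable_id_seq INCREMENT BY 2 RESTART WITH 1;  -- 1, 3, 5, ...
-- On node 2:
ALTER SEQUENCE mytable_id_seq INCREMENT BY 2 RESTART WITH 2;  -- 2, 4, 6, ...
```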
>
> Sequences are only one (small) problem. What about functions returning
> different results (volatile) for each call? Just imagine random() or
> now().
Yes, that's a problem. The bodge workaround is to capture the master's
value for each such function, re-pack the function call into a literal in
a pre-execution trigger, and then replicate the literals.
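As a sketch of that workaround (table and column names are made up), a BEFORE trigger can evaluate the volatile functions once on the master, so what gets replicated carries plain literals:

```sql
-- Sketch: evaluate volatile functions once, on the master, in a
-- BEFORE trigger; the replicated data then contains only literals.
CREATE OR REPLACE FUNCTION freeze_volatiles() RETURNS trigger AS $$
BEGIN
    NEW.created_at := now();      -- fixed here, identical on every slave
    NEW.token      := random();   -- likewise
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER freeze_volatiles_before_insert
    BEFORE INSERT ON mytable
    FOR EACH ROW EXECUTE PROCEDURE freeze_volatiles();
```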
> What about inserts or updates selecting parts of table data? You
> can't be sure to get exactly the same results on the slave.
You can if you have an ordering consistency check mechanism, as I
mentioned in the other mail. Recovery when "something goes wrong" (tm),
however, could get interesting, especially under heavy distributed write
load. If there's a counter and a hash, I guess you could lock
everything, find the node with the highest counter, release the lock on
everything else until the rest catch up, then re-lock and replicate. It
would add a fair bit of latency, though.
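One way to sketch that counter-and-hash bookkeeping (the schema is purely illustrative, not a worked implementation):

```sql
-- Illustrative only: each node records how many replicated statements
-- it has applied plus a running hash over them; nodes whose
-- (counter, stmt_hash) pairs diverge can be detected and resynchronised
-- under the global lock described above.
CREATE TABLE repl_state (
    node_id   integer PRIMARY KEY,
    counter   bigint  NOT NULL DEFAULT 0,
    stmt_hash text    NOT NULL DEFAULT ''
);

-- After applying each statement on a node:
-- UPDATE repl_state
--    SET counter   = counter + 1,
--        stmt_hash = md5(stmt_hash || applied_statement)
--  WHERE node_id = this_node;
```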
Gordan