From: Gavin Hamill <gdh(at)laterooms(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Load testing across 2 machines
Date: 2006-04-08 14:10:28
Message-ID: 20060408151028.1f1e7e18.gdh@laterooms.com
Lists: pgsql-general
Hi,
I'm asking here in case this kind of thing has been done before - I've not
been able to find anything about it myself.
We have two pg 8.1.3 servers, one live and one test. What I'd like is
something like pgpool acting as a connection broker, but instead of pgpool's
own replication (where all queries are sent to both servers and SELECTs are
load-balanced between them), I'm aiming for this scenario:
UPDATE/DELETE/INSERT go only to live; Slony replicates live to test. This
lets test go offline if necessary and easily 'catch up' later - much more
convenient than pgpool's suggestion of 'stop both servers, then rsync the db
files from master to slave'.
SELECTs go to *both* live and test, but only the answers from live are sent
back to clients - the answers from test are discarded.
This would gracefully allow the test machine to be exercised and monitored
under a real workload, with no danger of affecting the performance of the
live system or of returning bad data to clients.
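To make the routing idea concrete, here's a rough sketch of the logic I have
in mind, written as a tiny Python/psycopg2 wrapper rather than as an actual
pgpool extension - the connection strings and function names are just
placeholders:

    # Rough sketch only: shows the routing I want, not a real pgpool change.
    import threading
    import psycopg2

    live = psycopg2.connect("host=live-server dbname=appdb")  # placeholder DSN
    test = psycopg2.connect("host=test-server dbname=appdb")  # placeholder DSN
    live.autocommit = True
    test.autocommit = True

    def mirror_to_test(sql, params):
        # Run the same SELECT on the test box; the answer is thrown away.
        try:
            cur = test.cursor()
            cur.execute(sql, params)
            cur.fetchall()            # discard
            cur.close()
        except Exception:
            pass                      # test going offline must never hurt live

    def route(sql, params=None):
        # UPDATE/DELETE/INSERT go only to live; SELECTs are also mirrored to
        # test in a background thread so live's response isn't delayed.
        if sql.lstrip().upper().startswith("SELECT"):
            threading.Thread(target=mirror_to_test, args=(sql, params)).start()
        cur = live.cursor()
        cur.execute(sql, params)
        rows = cur.fetchall() if cur.description else None  # live's answer only
        cur.close()
        return rows

So something like route("SELECT * FROM bookings WHERE hotel_id = %s", (42,))
would return live's rows to the client while the identical query runs on test
purely for monitoring (table and column names here are made up).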
Has this been done already? Can it be done by extending pgpool or
otherwise without requiring C coding skills? :)
Cheers,
Gavin.