From: Michael A Nachbaur <mike(at)nachbaur(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: Replication for a large database
Date: 2003-05-05 16:52:33
Message-ID: 200305050952.33688.mike@nachbaur.com
Lists: pgsql-sql
Hello all,
I apologize if this has already been covered in the past, but I couldn't seem
to find an adequate solution to my problem in the archives.
I have a database that is used for a bandwidth tracking system at a broadband
ISP. To make a long story short, I'm inserting over 800,000 records per day
into this database. Suffice it to say, the uptime of this database is of
paramount importance, so I would like to have a more up-to-date backup copy
of my database in the event of a failure (more recent than my twice-per-day
db_dump backup).
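For context, the twice-per-day dump described above can be scheduled with cron. A minimal sketch, assuming a database named `bandwidth` and a backup directory of `/var/backups/pgsql` (both placeholders, not from the original post):

```shell
#!/bin/sh
# Hypothetical backup script; database name and paths are assumptions.
# Run it twice a day from cron with an entry such as:
#   0 6,18 * * *  /usr/local/bin/pg_backup.sh

DB=bandwidth                # placeholder database name
DIR=/var/backups/pgsql      # placeholder backup directory
STAMP=$(date +%Y%m%d-%H%M)

# pg_dump takes a consistent snapshot without blocking writers,
# so inserts can continue while the dump runs.
pg_dump "$DB" | gzip > "$DIR/$DB-$STAMP.sql.gz"
```

Note this is still a point-in-time backup, not the "live" replica the post is asking for; anything inserted after the last dump is lost on failure.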
I have two servers, both Dual Xeon-2G with 4G of RAM, and would like to
replicate between the two. I would like to have "live" replication, but I
couldn't find a solution for that for PostgreSQL. I tried RServ, but after
setting it up I saw a mailing list posting saying that it is more-or-less
useless for databases with a large number of inserts (like mine).
When I perform a replication after a batch of data is inserted, the query runs
for hours before it returns. I have never actually seen a replication run
complete, since it takes longer than my 8-12 hour days here at work.
Is there any replication solution that would fit my needs? I'm taking
advantage of some PG7.2 features so "downgrading" to the 6.x version of
postgres that has replication support isn't an option.
Thanks.
--man