Re: Replication for a large database

From: Michael A Nachbaur <mike(at)nachbaur(dot)com>
To: "Ryan" <pgsql-sql(at)seahat(dot)com>, <pgsql-sql(at)postgresql(dot)org>
Subject: Re: Replication for a large database
Date: 2003-05-05 19:26:35
Message-ID: 200305051226.35780.mike@nachbaur.com
Lists: pgsql-sql

I have thought about this. The problem I run into is data consistency. I
have at least 8 different processes that harvest data, and an intranet
website that can also manipulate the database (to assign customers to
different packages, re-assign modems to different customers, etc.). Trying to
maintain consistency across the entire application would be such a nightmare,
I don't want to think about it.

If I go with a centralized middleware server that manages all database access,
then I could perhaps do it there: open transactions on both databases, and if
either transaction fails, roll back the other. But this would make my entire
framework very rigid.
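
Roughly what I have in mind is something like the following (just a sketch,
assuming Python with psycopg2; the connection settings and the dual_write
helper are made up for illustration, not code I actually have):

import psycopg2

def dual_write(sql, params):
    """Run the same statement on both servers; commit only if both succeed."""
    # Hypothetical connection settings -- adjust to taste.
    primary = psycopg2.connect(host="db1", dbname="bandwidth", user="app")
    standby = psycopg2.connect(host="db2", dbname="bandwidth", user="app")
    try:
        cur1 = primary.cursor()
        cur2 = standby.cursor()
        cur1.execute(sql, params)
        cur2.execute(sql, params)
        primary.commit()
        standby.commit()  # if this fails after the first commit, the copies diverge
    except Exception:
        primary.rollback()
        standby.rollback()
        raise
    finally:
        primary.close()
        standby.close()

This isn't real two-phase commit, of course: if the second commit fails after
the first one has already gone through, the two copies still drift apart,
which is exactly the consistency headache I described above.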

On Monday 05 May 2003 09:16 am, Ryan wrote:
> Ok, maybe this is just because I'm coming from a layman's perspective
> regarding enterprise level databases, but couldn't you fake replication
> by inserting the data into both databases? (granted this involves
> having source access to the program doing the insertion.)
>
> It may be a kludge, but it would work until something better came along.
>
> Ryan
>
> > Hello all,
> >
> > I apologize if this has already been covered in the past, but I
> > couldn't seem to find an adequate solution to my problem in the
> > archives.
> >
> > I have a database that is used for a bandwidth tracking system at a
> > broadband ISP. To make a long story short, I'm inserting over
> > 800,000 records per day into this database. Suffice to say, the
> > uptime of this database is of paramount importance, so I would like
> > to have a more up-to-date backup copy of my database in the event of
> > a failure (more recent than my twice-per-day db_dump backup).
> >
> > I have two servers, both Dual Xeon-2G with 4G of RAM, and would like
> > to replicate between the two. I would like to have "live"
> > replication, but I couldn't seem to find a solution for that for
> > PostgreSQL. I tried RServ, but after attempting it I saw a mailing
> > list posting saying that it is more or less useless for databases
> > that have a large number of inserts (like mine).
> >
> > When I perform a replication after a batch of data is inserted, the
> > query runs literally for hours before it returns. I have never
> > actually been present during the whole replication duration since it
> > takes longer than my 8-12 hour days here at work.
> >
> > Is there any replication solution that would fit my needs? I'm taking
> > advantage of some PG7.2 features so "downgrading" to the 6.x version
> > of postgres that has replication support isn't an option.
> >
> > Thanks.
> >
> > --man
