Re: 9.2 streaming replication issue and solution strategy

From: "Kevin Grittner" <kgrittn(at)mail(dot)com>
To: "T(dot) E(dot) Lawrence" <t(dot)e(dot)lawrence(at)icloud(dot)com>,"Adrian Klaver" <adrian(dot)klaver(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: 9.2 streaming replication issue and solution strategy
Date: 2012-11-20 22:25:50
Message-ID: 20121120222550.156370@gmx.com
Lists: pgsql-general

Adrian Klaver wrote:

> I am hoping to hear more from people who have running 9.2 systems
> w/ between 100m and 1b records, w/ streaming replication and heavy
> data mining on the slaves (5-50m records read per hour by multiple
> parallel processes), while from time to time (2-3 times/week)
> between 20 and 50m records are inserted/updated within 24 hours.

I've run replication on that general scale. IMV, when you are using
PostgreSQL hot standby and streaming replication you need to decide
whether a particular replica is primarily for recovery, in which case
long-running queries are liable to be canceled by replication
conflicts, or primarily for reporting, in which case long-running
queries can finish, but the data in the database may get relatively
stale while they run. If you have multiple replicas, you probably
want to configure them differently in this regard.
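As a rough sketch of what "configured differently" can mean in
postgresql.conf on each standby (settings shown are real 9.2 standby
parameters; the specific values are illustrative, not a recommendation):

```
# Recovery-oriented standby: keep replay current; conflicting
# queries are canceled after a short grace period.
max_standby_streaming_delay = 30s   # default; replay wins over queries
hot_standby_feedback = off

# Reporting-oriented standby: let long queries finish; replay
# (and thus data freshness) may fall behind while they run.
max_standby_streaming_delay = -1    # never cancel queries for replay
```

Note that hot_standby_feedback = on is another option on a reporting
standby: it reduces query cancellations by telling the master which
rows the standby still needs, at the cost of some bloat on the master.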

-Kevin
