| From: | "T(dot) E(dot) Lawrence" <t(dot)e(dot)lawrence(at)icloud(dot)com> |
|---|---|
| To: | Adrian Klaver <adrian(dot)klaver(at)gmail(dot)com> |
| Cc: | "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: 9.2 streaming replication issue and solution strategy |
| Date: | 2012-11-17 15:33:40 |
| Message-ID: | 3C0D8A80-DFF4-48A3-A10A-8ABC1899BD27@icloud.com |
| Lists: | pgsql-general |
> Have you looked at the below?:
>
> http://www.postgresql.org/docs/9.2/interactive/hot-standby.html#HOT-STANDBY-CONFLICT
>
> 25.5.2. Handling Query Conflicts
Yes, thank you!
I am hoping to hear more from people who run 9.2 systems with between 100m and 1b records, with streaming replication and heavy data mining on the slaves (5-50m records read per hour by multiple parallel processes), while from time to time (2-3 times/week) 20-50m records are inserted or updated within 24 hours.
How do they resolve this situation?
For us, retry + switch slave works quite well right now, without touching the db configuration in this respect yet.
But maybe there are different approaches.
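For what it's worth, the "retry + switch slave" approach can be sketched roughly as below. This is a minimal illustration, not our actual code: `RecoveryConflict`, the slave names, and `query_fn` are all hypothetical stand-ins. In a real deployment the exception would be the driver's error for "canceling statement due to conflict with recovery" (PostgreSQL reports this as SQLSTATE 40001, serialization_failure, on hot standbys).

```python
class RecoveryConflict(Exception):
    """Hypothetical stand-in for a query cancelled by a replication conflict."""

def run_with_failover(slaves, query_fn, retries_per_slave=2):
    """Run query_fn against each slave in turn; retry a few times on a
    recovery conflict, then fall through to the next slave."""
    last_error = None
    for slave in slaves:
        for _attempt in range(retries_per_slave):
            try:
                return query_fn(slave)
            except RecoveryConflict as e:
                last_error = e  # cancelled by recovery; retry or switch slave
    # Every slave was exhausted; surface the last conflict to the caller.
    raise last_error

# Usage with stub "slaves": the first always conflicts, the second succeeds.
def query(slave):
    if slave == "slave1":
        raise RecoveryConflict("canceling statement due to conflict with recovery")
    return "rows from " + slave

print(run_with_failover(["slave1", "slave2"], query))  # rows from slave2
```

The point of the loop structure is that a transient conflict (vacuum cleanup catching up with a long read) is retried in place, while a slave that keeps cancelling is abandoned for the next one without failing the mining job.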