Re: Replicating hundreds of thousands of rows

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Job <Job(at)colliniconsulting(dot)it>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Replicating hundreds of thousands of rows
Date: 2016-11-25 14:52:30
Message-ID: CANP8+jJo121812Yq3H4DVAnVEbE9+zpZJBM6rwPnn8dx9p8EHw@mail.gmail.com
Lists: pgsql-general

On 25 November 2016 at 06:23, Job <Job(at)colliniconsulting(dot)it> wrote:
> Hello,
>
> we need to replicate hundreds of thousands of rows (for reporting) between PostgreSQL database nodes in different locations.
>
> Currently we use Rubyrep with PostgreSQL 8.4.22.

8.4 is now end-of-life. You should move to the latest version.

> It works fine, but it is very slow with massive numbers of rows.
>
> With PostgreSQL 9.x, is there a way to replicate these quantities of data (in the background, not in real time!)?
> We need periodic synchronization.

You have a choice of:

* Physical streaming replication, built in from 9.0+ (first sketch below)
* Logical streaming replication, partially built in from 9.4+ using the
  pglogical extension (second sketch below)
* Logical streaming replication, built in from 10.0+ (not yet released)
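
For physical streaming replication, the bare minimum on a 9.x pair
looks roughly like this (a sketch only; the hostnames, paths, 'repuser'
role and 9.5 version are placeholders, not part of any one recipe):

  # primary postgresql.conf
  wal_level = hot_standby        # renamed 'replica' in 9.6
  max_wal_senders = 3
  wal_keep_segments = 64

  # primary pg_hba.conf -- allow the standby to connect for replication
  host  replication  repuser  192.0.2.10/32  md5

  # on the standby: seed the data directory from the primary
  pg_basebackup -h primary.example.com -U repuser -X stream \
      -D /var/lib/postgresql/9.5/main

  # standby recovery.conf
  standby_mode = 'on'
  primary_conninfo = 'host=primary.example.com port=5432 user=repuser'

  # standby postgresql.conf -- allow read-only reporting queries
  hot_standby = on

This replicates the whole cluster, so you cannot pick individual
tables, but the standby can serve your reporting queries directly.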

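With pglogical you can replicate just the tables you need. A sketch,
again with placeholder node names and DSNs, assuming the extension is
installed on both nodes and each runs with wal_level = logical and
shared_preload_libraries = 'pglogical':

  -- on the provider
  CREATE EXTENSION pglogical;
  SELECT pglogical.create_node(
      node_name := 'provider1',
      dsn := 'host=provider.example.com dbname=reports');
  -- add every table in schema public to the default replication set
  SELECT pglogical.replication_set_add_all_tables('default',
      ARRAY['public']);

  -- on the subscriber
  CREATE EXTENSION pglogical;
  SELECT pglogical.create_node(
      node_name := 'subscriber1',
      dsn := 'host=subscriber.example.com dbname=reports');
  SELECT pglogical.create_subscription(
      subscription_name := 'sub1',
      provider_dsn := 'host=provider.example.com dbname=reports');

The initial COPY of existing rows happens when the subscription is
created, and changes then stream in the background, which sounds like
what you want for reporting.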
Performance of either is much better than Rubyrep's.

--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
