Re: Fastest way to duplicate a quite large database

From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Edson Richter <edsonrichter(at)hotmail(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Fastest way to duplicate a quite large database
Date: 2016-04-13 14:18:20
Message-ID: 570E552C.8090908@aklaver.com
Lists: pgsql-general

On 04/13/2016 06:58 AM, Edson Richter wrote:

>
>
> Another trouble I've found: I've used "pg_dump" and "pg_restore" to
> create the new CustomerTest database in my cluster. Immediately,
> replication started to replicate the 60Gb data into slave, causing big
> trouble.
> Does marking it as "template" avoid replication of that "copied" database?
> How can I mark a database as "do not replicate"?

With the Postgres built-in binary replication you can't; it replicates
the entire cluster. There are third-party solutions that offer that choice:

http://www.postgresql.org/docs/9.5/interactive/different-replication-solutions.html

Table 25-1. High Availability, Load Balancing, and Replication Feature
Matrix

It has been mentioned before that running a non-production database on
the same cluster as the production database is generally not a good idea.
Per previous suggestions, I would host your CustomerTest database on
another instance/cluster of Postgres listening on a different port. Then
all your customers have to do is create a connection that points at the
new port.
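As a rough sketch of that setup (the paths, port number, and dump file
name here are examples, not anything from this thread):

```shell
# Run CustomerTest in its own Postgres cluster on another port, so the
# built-in streaming replication of the production cluster never sees it.

# 1. Create a new data directory for the test cluster.
initdb -D /var/lib/postgresql/customertest

# 2. Start it on a non-default port (production stays on 5432).
pg_ctl -D /var/lib/postgresql/customertest \
       -o "-p 5433" -l /var/log/postgresql/customertest.log start

# 3. Restore the dump into the new cluster instead of the production one.
createdb -p 5433 CustomerTest
pg_restore -p 5433 -d CustomerTest /backups/customer.dump

# 4. Clients only change the port in their connection settings:
psql "host=dbhost port=5433 dbname=CustomerTest"
```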

>
> Thanks,
>
> Edson
>
>

--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com
