Re: Fastest way to duplicate a quite large database

From: Adrian Klaver <adrian(dot)klaver(at)aklaver(dot)com>
To: Edson Richter <edsonrichter(at)hotmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: Fastest way to duplicate a quite large database
Date: 2016-04-13 14:59:27
Message-ID: 570E5ECF.4070102@aklaver.com
Lists: pgsql-general

On 04/13/2016 07:46 AM, Edson Richter wrote:
> Em 13/04/2016 11:18, Adrian Klaver escreveu:
>> On 04/13/2016 06:58 AM, Edson Richter wrote:
>>
>>>
>>>
>>> Another trouble I've found: I've used "pg_dump" and "pg_restore" to
>>> create the new CustomerTest database in my cluster. Immediately,
>>> replication started to replicate the 60 GB of data to the slave, causing big
>>> trouble.
>>> Does marking it as "template" avoid replication of that "copied" database?
>>> How can I mark a database as "do not replicate"?
>>
>> With the Postgres built in binary replication you can't, it replicates
>> the entire cluster. There are third party solutions that offer that
>> choice:
>>
>> http://www.postgresql.org/docs/9.5/interactive/different-replication-solutions.html
>>
>>
>> Table 25-1. High Availability, Load Balancing, and Replication Feature
>> Matrix
>
> Thanks, I'll look at that.
>
>> It has been mentioned before, running a non-production database on the
>> same cluster as the production database is generally not a good idea.
>> Per previous suggestions I would host your CustomerTest database on
>> another instance/cluster of Postgres listening on a different port.
>> Then all your customers have to do is create a connection that points
>> at the new port.
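
For example (port 5433, host and database names here are just
placeholders), the only change on their side would be something like:

    psql -h dbhost -p 5433 -d customertest
    # or, for clients that take a libpq connection URI:
    postgresql://dbhost:5433/customertest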
>
> Thanks for the concern.
> This "CustomerTest" database is a staging, for customer approval before
> upgrading the production system.
> I bet the users will only open the system and say it is OK. As busy as
> people are these days, I doubt they will validate something that is

Not necessarily a bet I would count on:)

> already validated by our development team.
> But our contractor requires, and we provide.
> Since we have "express delivery of new versions" (almost 2 per week), we
> would like to automate the staging environment.

My guess is setting up a different cluster and using pg_basebackup (as
John suggested) will be a lot easier to automate than creating a
differential replication setup. Anyway, I have beaten this drum long enough.
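
That said, for what it's worth, here is a rough sketch of what that
automation could look like. The paths, the port 5433, the host name and
the "replication" role below are just placeholders, and it assumes
pg_hba.conf on the production side already allows a replication
connection:

    #!/bin/sh
    # Refresh the staging cluster from production (sketch only).
    STAGING_DIR=/var/lib/pgsql/staging
    STAGING_PORT=5433

    # Stop and remove the previous staging cluster, if any.
    pg_ctl stop -D "$STAGING_DIR" -m fast || true
    rm -rf "$STAGING_DIR"

    # Take a fresh binary copy of the whole production cluster,
    # streaming the WAL needed to make it consistent.
    pg_basebackup -h prod-db -p 5432 -U replication \
        -D "$STAGING_DIR" -X stream -P

    # Start it as an independent cluster on its own port. No recovery.conf
    # is written, so it comes up as a normal primary rather than a standby,
    # and whatever you load or drop there never touches the production
    # replication.
    pg_ctl start -D "$STAGING_DIR" -o "-p $STAGING_PORT" -w

Your customers then just point their connections at port 5433.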

>
> Thanks,
>
> Edson
>
>>
>>>
>>> Thanks,
>>>
>>> Edson
>>>
>>>
>>
>>
>
>
>

--
Adrian Klaver
adrian(dot)klaver(at)aklaver(dot)com
