Re: a back up question

From: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
To: Martin Mueller <martinmueller(at)northwestern(dot)edu>
Cc: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: a back up question
Date: 2017-12-05 21:59:29
Message-ID: CAKFQuwaQkF6k2mi0DDSqn6aXuU-+gZQAMQwsoEzYehmApceLgg@mail.gmail.com
Lists: pgsql-general

On Tue, Dec 5, 2017 at 2:52 PM, Martin Mueller <
martinmueller(at)northwestern(dot)edu> wrote:

> Are there rules of thumb for deciding when you can dump a whole database
> and when you’d be better off dumping groups of tables? I have a database
> that has around 100 tables, some of them quite large, and right now the
> data directory is well over 100GB. My hunch is that I should divide and
> conquer, but I don’t have a clear sense of what counts as “too big” these
> days. Nor do I have a clear sense of whether the constraints have to do
> with overall size, the number of tables, or machine memory (my machine has
> 32GB of memory).
>
> Is 10GB a good practical limit to keep in mind?
>
I'd say the rule of thumb is: if you have to "divide and conquer", you should
use a non-pg_dump-based backup solution. "Too big" is usually measured in
units of time, not memory.
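
For what it's worth, a rough sketch of the two usual directions (the database
name "mydb", the target paths, and the job count below are placeholders, not
something from your setup): pg_dump's directory format can dump tables in
parallel, which attacks the time problem directly, and pg_basebackup is a
file-level, non-pg_dump alternative that copies the whole cluster:

    # Parallel logical dump of the whole database (directory format, 4 jobs)
    pg_dump -Fd -j 4 -f /backups/mydb.dump mydb

    # File-level physical backup of the entire cluster (non-pg_dump)
    pg_basebackup -D /backups/base -Ft -z -P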

Any ability to partition your backups into discrete chunks is going to be
very specific to your personal setup. Restoring such a monster without
constraint violations is something I'd be VERY worried about.
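
If you do stick with a single pg_dump of the whole database, the directory
format above restores in parallel too, and because pg_dump writes constraints
after the data, restoring the full dump avoids the foreign-key ordering
problems that hand-partitioned, per-table dumps can run into:

    # Parallel restore into an (assumed) empty database named mydb
    pg_restore -d mydb -j 4 /backups/mydb.dump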

David J.
