Re: a back up question

From: Carl Karsten <carl(at)personnelware(dot)com>
To: Martin Mueller <martinmueller(at)northwestern(dot)edu>
Cc: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: a back up question
Date: 2017-12-05 22:06:39
Message-ID: CADmzSSi0jTJ05pEyQPd0Jz=7pczwxGPVwr9dRNBwG7MBLHp=Wg@mail.gmail.com
Lists: pgsql-general

Nothing wrong with lots of tables and data.

Don't impose any constraints on your problem that you don't need.

For example, what are you backing up to? $400 for a 1 TB SSD, or $80 for a 2 TB
USB 3 spinning disk?

If you are backing up while the db is being updated, you need to make sure the
backup still gets a consistent snapshot of the data; don't try to hand-roll
that process. Personally I would assume the db is always being updated and
plan for it.
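
For what it's worth, pg_dump already handles that part: it runs inside a single
transaction, so it sees a consistent snapshot even while writes keep coming in.
A rough sketch, assuming pg_dump is the tool here (database name and paths are
just placeholders):

    # Custom-format dump of the whole database; -Z sets the compression level.
    # pg_dump runs in one transaction, so concurrent writes don't change
    # what ends up in the dump.
    pg_dump -Fc -Z 5 -f /mnt/backup/mydb.dump mydb

    # Directory format plus -j runs several dump workers in parallel,
    # which helps a lot on a 100GB-plus database.
    pg_dump -Fd -j 4 -f /mnt/backup/mydb_dir mydb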

On Tue, Dec 5, 2017 at 3:52 PM, Martin Mueller <
martinmueller(at)northwestern(dot)edu> wrote:

> Are there rules of thumb for deciding when you can dump a whole database
> and when you’d be better off dumping groups of tables? I have a database
> that has around 100 tables, some of them quite large, and right now the
> data directory is well over 100GB. My hunch is that I should divide and
> conquer, but I don’t have a clear sense of what counts as “too big” these
> days. Nor do I have a clear sense of whether the constraints have to do
> with overall size, the number of tables, or machine memory (my machine has
> 32GB of memory).
>
> Is 10GB a good practical limit to keep in mind?
>
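
On the whole-database vs. groups-of-tables question above: with the custom or
directory formats you don't really have to choose up front, because pg_restore
can pull individual tables back out of a full dump later. A rough sketch along
the same lines (table names are placeholders):

    # Dump just a group of tables; -t can be repeated or given a pattern.
    pg_dump -Fc -t big_table1 -t big_table2 -f /mnt/backup/big_tables.dump mydb

    # Restore a single table out of a whole-database custom-format dump.
    pg_restore -t big_table1 -d mydb /mnt/backup/mydb.dump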

--
Carl K
