From: Martin Mueller <martinmueller(at)northwestern(dot)edu>
To: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
Cc: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: a back up question
Date: 2017-12-05 22:09:32
Message-ID: 6E6ED72C-BCC2-4F67-AEAC-C501DF2CAC58@northwestern.edu
Lists: pgsql-general
Time is not really a problem for me, as long as we are talking about hours rather than days. On a roughly comparable machine I’ve made backups of databases under 10 GB, and it was a matter of minutes. But I know there are scale problems; sometimes programs just hang once the data grow beyond some size. Is that likely in Postgres if you go from ~10 GB to ~100 GB? There isn’t any interdependence among my tables beyond queries I construct on the fly, because I use the database in a single-user environment.
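From what I can tell, pg_dump streams each table’s rows out as it goes rather than holding whole tables in memory, so going from ~10 GB to ~100 GB should mostly mean a longer run, not a hang. A minimal sketch of the kind of invocation I have in mind (the database name, output paths, and job count are placeholders, not anything from this thread):

    # one compressed archive in pg_dump's custom format
    pg_dump -Fc -f mydb.dump mydb

    # directory format with four parallel worker jobs (parallel dump requires -Fd)
    pg_dump -Fd -j 4 -f mydb_dump_dir mydb

Both forms can later be restored with pg_restore, either in full or selectively.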
From: "David G. Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
Date: Tuesday, December 5, 2017 at 3:59 PM
To: Martin Mueller <martinmueller(at)northwestern(dot)edu>
Cc: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: a back up question
On Tue, Dec 5, 2017 at 2:52 PM, Martin Mueller <martinmueller(at)northwestern(dot)edu> wrote:
Are there rules of thumb for deciding when you can dump a whole database and when you’d be better off dumping groups of tables? I have a database that has around 100 tables, some of them quite large, and right now the data directory is well over 100 GB. My hunch is that I should divide and conquer, but I don’t have a clear sense of what counts as “too big” these days. Nor do I have a clear sense of whether the constraints have to do with overall size, the number of tables, or machine memory (my machine has 32 GB of memory).
Is 10GB a good practical limit to keep in mind?
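For concreteness, the two approaches look roughly like this; this is a sketch only, and the table names are made up for illustration:

    # dump the whole database into one custom-format archive
    pg_dump -Fc -f whole_db.dump mydb

    # dump just a group of tables; -t can be repeated and accepts patterns
    pg_dump -Fc -t 'texts' -t 'tokens_*' -f text_tables.dump mydb

A custom-format archive can also be restored selectively with pg_restore -t, so splitting at dump time is not the only way to get per-table granularity.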
I'd say the rule of thumb is that if you have to "divide-and-conquer" you should use non-pg_dump-based backup solutions. "Too big" is usually measured in units of time, not memory.
Any ability to partition your backups into discrete chunks is going to be very specific to your personal setup. Restoring such a monster without constraint violations is something I'd be VERY worried about.
David J.
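As an illustration of what a non-pg_dump-based solution can look like, pg_basebackup takes a physical copy of the whole cluster; the backup directory below is a placeholder:

    # binary base backup of the entire cluster, written as compressed tar files, with progress reporting
    pg_basebackup -D /backups/base_2017-12-05 -Ft -z -P

A physical backup like this restores the cluster as a single unit, which is why it avoids the constraint-violation worries that come with restoring a pieced-together set of logical dumps.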