| From: | Martin Mueller <martinmueller(at)northwestern(dot)edu> |
|---|---|
| To: | "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org> |
| Subject: | a back up question |
| Date: | 2017-12-05 21:52:28 |
| Message-ID: | 001039C7-15DF-4A44-B0B9-3E100C9D68D3@northwestern.edu |
| Lists: | pgsql-general |
Are there rules of thumb for deciding when you can dump a whole database and when you’d be better off dumping groups of tables? I have a database with around 100 tables, some of them quite large, and right now the data directory is well over 100GB. My hunch is that I should divide and conquer, but I don’t have a clear sense of what counts as “too big” these days. Nor do I have a clear sense of whether the constraints have to do with overall size, the number of tables, or machine memory (my machine has 32GB of memory).
Is 10GB a good practical limit to keep in mind?
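For concreteness, a minimal sketch of the two approaches being weighed, using pg_dump; the database name, table names, backup paths, and job count below are placeholders and not from the original message:

```
# Whole-database dump: directory format (-Fd) permits parallel dump workers (-j)
pg_dump -Fd -j 4 -f /backups/mydb.dir mydb

# Divide and conquer: dump selected large tables on their own (custom format),
# then dump everything else by excluding those same tables
pg_dump -Fc -t big_table_a -t big_table_b -f /backups/big_tables.dump mydb
pg_dump -Fc -T big_table_a -T big_table_b -f /backups/rest.dump mydb
```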