From: Karsten Hilbert <Karsten(dot)Hilbert(at)gmx(dot)net>
To: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: a back up question
Date: 2017-12-06 13:24:43
Message-ID: 20171206132442.GC4346@hermes.hilbert.loc
Lists: pgsql-general
On Wed, Dec 06, 2017 at 12:52:53PM +0000, Martin Mueller wrote:
>>> Are there rules of thumb for deciding when you can dump a
>>> whole database and when you’d be better off dumping groups of
>>> tables?
>>
>> It seems to me we'd have to define the objective of "dumping" first ?
> The objective is to create a backup from which I can
> restore any or all tables in the event of a crash.
I see.
"Any or all" speaks in recommendation of non-plain output
formats _if_ using pg_dump.
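For instance, a minimal sketch ("mydb" and "mytable" are
placeholders for your own database and table names):

    # dump the whole database in custom format
    pg_dump --format=custom --file=mydb.dump mydb

    # later, restore just one table out of that dump
    pg_restore --dbname=mydb --table=mytable mydb.dump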
> In my case, I use Postgres for my own scholarly purposes.
> Publications of whatever kind are not directly made public
> via the database. I am my only customer, and a service
> interruption, while a nuisance to me, does not create a
> crisis for others. I don’t want to lose my work, but a
> service interruption of a day or a week is no big deal.
In that case I would stick to pg_dump, perhaps using the
directory format with the dump then tarred and compressed,
until you notice actual problems (unbearable slowdown of the
machine during backup, or running out of disk space).
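Something along these lines (names, paths, and the job count
are placeholders to adjust to your setup):

    # directory-format dump, writing two tables in parallel
    pg_dump --format=directory --jobs=2 --file=mydb.dir mydb

    # bundle the dump directory into a single archive
    tar czf mydb.dir.tar.gz mydb.dir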
My 2 cents,
Karsten
--
GPG key ID E4071346 @ eu.pool.sks-keyservers.net
E167 67FD A291 2BEA 73BD 4537 78B9 A9F9 E407 1346