Re: pg_dump, pg_dumpall and data durability

From: Andres Freund <andres(at)anarazel(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Albe Laurenz <laurenz(dot)albe(at)wien(dot)gv(dot)at>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump, pg_dumpall and data durability
Date: 2016-11-13 09:18:41
Message-ID: 20161113091841.llyvzskvjuo27m5f@alap3.anarazel.de
Lists: pgsql-hackers

Hi,

On 2016-11-08 18:18:01 -0500, Tom Lane wrote:
> I think this might be better addressed by adding something to backup.sgml
> pointing out that you'd better fsync or sync your backups before assuming
> that they can't be lost.

How does a normal user do that? I don't think there's any cross-platform
advice we can give, and even on *nix the answer basically is 'sync;
sync;', which is a pretty big hammer and might be completely
unacceptable on a busy server.
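For comparison, a more targeted option than a system-wide sync would be to
fsync just the dump file and its containing directory. A minimal sketch (not
pg_dump code; assumes a POSIX system, and the paths are purely illustrative):

	/*
	 * Sketch: flush one dump file to stable storage without a
	 * system-wide sync.
	 */
	#include <fcntl.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>

	static void
	fsync_path(const char *path)
	{
		int fd = open(path, O_RDONLY);

		if (fd < 0 || fsync(fd) != 0)
		{
			perror(path);
			exit(1);
		}
		close(fd);
	}

	int
	main(void)
	{
		/* fsync the dump itself, then its directory so the entry is durable */
		fsync_path("/backups/dump.sql");
		fsync_path("/backups");
		return 0;
	}

But that's not something we can reasonably expect a normal user to do by
hand, and it isn't portable either.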

Regards,

Andres
