From: Tom Lane <tgl@sss.pgh.pa.us>
To: Ron <ronljohnsonjr@gmail.com>
Cc: pgsql-admin <pgsql-admin@postgresql.org>
Subject: Re: pg_dump using anything other than custom and directory
Date: 2019-04-12 23:05:31
Message-ID: 8580.1555110331@sss.pgh.pa.us
Lists: pgsql-admin
Ron <ronljohnsonjr@gmail.com> writes:
> In 2019 using supported versions of PostgreSQL, what practical use is there
> to use the tar format, and -- other than migrating trivially sized databases
> to other RDBMSs -- the plain format?
The historical argument for the tar format is that you can get your
data out of it with a standard Unix tool (tar, of course), rather than
having to depend on the availability of pg_restore. Certainly there's
room to argue about how important that really is, but I don't think
the validity of the argument is much different than it was in 2001.
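
As an illustration (commands are a sketch; the numeric member names
inside the archive vary from dump to dump, and 3104.dat here is made up),
a tar-format dump can be listed and unpacked with nothing but tar:

    pg_dump -Ft mydb > mydb.tar     # take a tar-format dump
    tar -tf mydb.tar                # list members: toc.dat, restore.sql, NNNN.dat ...
    tar -xOf mydb.tar 3104.dat      # stream one table's data to stdout

The restore.sql member is an ordinary SQL script, so in a pinch the whole
dump is usable without any PostgreSQL client tools at all.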
You need to be able to get a plain-text dump if you want to edit
the data or schema at all, which is a pretty common requirement.
However, as long as you're willing to assume the availability of
pg_restore, you can extract plain text from one of the other formats;
so this point isn't a reason not to make your dump in one of the
other formats to begin with.
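
For example (file names here are hypothetical), turning a custom-format
dump into an editable SQL script is just:

    pg_dump -Fc -f mydb.dump mydb                       # custom-format dump
    pg_restore -f mydb.sql mydb.dump                    # write plain SQL, no server needed
    pg_restore --schema-only -f schema.sql mydb.dump    # or only the schema

Since pg_restore isn't given -d, it never connects to a server; it just
emits the SQL text, which you can then edit and feed to psql.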
regards, tom lane