Re: Purpose of pg_dump tar archive format?

From: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Purpose of pg_dump tar archive format?
Date: 2024-06-04 23:36:34
Message-ID: CANzqJaDonyXAvPpNi5ZX3WUrCCxXfPHAzOKqab29Zq-WZ5UEaQ@mail.gmail.com
Lists: pgsql-general

On Tue, Jun 4, 2024 at 3:47 PM Gavin Roy <gavinr(at)aweber(dot)com> wrote:

>
> On Tue, Jun 4, 2024 at 3:15 PM Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
> wrote:
>
>>
>> But why tar instead of custom? That was part of my original question.
>>
>
> I've found it pretty useful for programmatically accessing data in a dump
> for large databases outside of the normal pg_dump/pg_restore workflow. You
> don't have to seek through one large binary file to get at the data.
>

Interesting. Please explain, though, since a big tarball _is_ "one large
binary file" that you have to sequentially scan. (I don't know the
internal structure of custom format files, and whether they have file
pointers to each table.)

Is it because you need individual .dat "COPY" files for something other
than loading into PG tables (since pg_restore --table=xxxx does that, too),
and directory format archives can be inconvenient?
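For illustration, the programmatic access Gavin describes might look like the sketch below, using Python's standard tarfile module. A pg_dump tar archive stores each table's COPY data as a separate `.dat` member alongside a `toc.dat` table of contents; the member name `4335.dat` and its contents here are hypothetical stand-ins, not taken from a real dump:

```python
import io
import tarfile

# Build a tiny stand-in for a pg_dump tar archive: each table's COPY
# data is a separate ".dat" member, plus a "toc.dat" table of contents.
# (The member name "4335.dat" and its payload are illustrative only.)
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as archive:
    for name, payload in [("toc.dat", b"...binary TOC..."),
                          ("4335.dat", b"1\talice\n2\tbob\n\\.\n")]:
        info = tarfile.TarInfo(name=name)
        info.size = len(payload)
        archive.addfile(info, io.BytesIO(payload))

# Read one table's COPY data with plain tar tooling -- no pg_restore needed.
buf.seek(0)
with tarfile.open(fileobj=buf, mode="r") as archive:
    member = archive.extractfile("4335.dat")
    rows = member.read().decode().splitlines()

print(rows[0])  # first COPY row of the hypothetical table
```

The point of the sketch is only that any tar-aware tool (tarfile, GNU tar, etc.) can pull out a single table's data stream, which is not possible with the custom format without pg_restore.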
