From: Gavin Roy <gavinr(at)aweber(dot)com>
To: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Purpose of pg_dump tar archive format?
Date: 2024-06-05 14:22:35
Message-ID: CAFVAjJGcpV1V9tD8r-9mNH64KcYaRQE2D1pTM0Lq8n-GDAWx0g@mail.gmail.com
Lists: pgsql-general
On Tue, Jun 4, 2024 at 7:36 PM Ron Johnson <ronljohnsonjr(at)gmail(dot)com> wrote:
> On Tue, Jun 4, 2024 at 3:47 PM Gavin Roy <gavinr(at)aweber(dot)com> wrote:
>
>>
>> On Tue, Jun 4, 2024 at 3:15 PM Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
>> wrote:
>>
>>>
>>> But why tar instead of custom? That was part of my original question.
>>>
>>
>> I've found it pretty useful for programmatically accessing data in a dump
>> of a large database outside of the normal pg_dump/pg_restore workflow. You
>> don't have to seek through one large binary file to get at the data for a
>> specific table.
>>
>
> Interesting. Please explain, though, since a big tarball _is_ "one large
> binary file" that you have to sequentially scan. (I don't know the
> internal structure of custom format files, and whether they have file
> pointers to each table.)
>
Not if you untar it first.
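
Once extracted (or read with any tar library), the archive is just toc.dat
plus one numbered .dat member per table holding that table's COPY data in
plain text form, so you can stream a single table's rows without running
pg_restore at all. A minimal sketch in Python; the archive name and the
"3011.dat" member are hypothetical examples, and in practice you'd list the
members (or read toc.dat) to find the file for the table you want:

    import tarfile

    # A pg_dump tar archive holds toc.dat (the table of contents) plus
    # one NNNN.dat member per table with that table's COPY text data.
    # "mydb.tar" and "3011.dat" are hypothetical names for illustration.
    with tarfile.open("mydb.tar") as archive:
        print(archive.getnames())  # e.g. ['toc.dat', '3011.dat', ...]
        member = archive.extractfile("3011.dat")
        for raw in member:
            line = raw.decode().rstrip("\n")
            if line == "\\.":  # COPY end-of-data marker
                break
            print(line.split("\t"))  # one row; \N denotes NULL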
> Is it because you need individual .dat "COPY" files for something other
> than loading into PG tables (since pg_restore --table=xxxx does that, too),
> and directory format archives can be inconvenient?
>
In the past I've used it for data analysis outside of Postgres.
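
For instance, a hypothetical sketch of pulling one extracted .dat file
straight into pandas (filename assumed; COPY text format is tab-separated
with \N for NULL and a trailing "\." terminator line):

    import pandas as pd

    # skipfooter=1 drops the "\." end-of-data line; it requires the
    # slower python parsing engine.
    df = pd.read_csv(
        "3011.dat",          # hypothetical extracted table member
        sep="\t",
        header=None,
        na_values="\\N",
        skipfooter=1,
        engine="python",
    )
    print(df.describe())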
--
*Gavin M. Roy*
CTO
AWeber