From: Dimitrios Apostolou <jimis(at)gmx(dot)net>
To: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Experience and feedback on pg_restore --data-only
Date: 2025-03-24 14:24:17
Message-ID: 5f1ebeda-f080-cb31-75c0-ce2211ea348f@gmx.net
Lists: pgsql-general
On Sun, 23 Mar 2025, Laurenz Albe wrote:
> On Thu, 2025-03-20 at 23:48 +0100, Dimitrios Apostolou wrote:
>> Performance issues: (important as my db size is >5TB)
>>
>> * WAL writes: I didn't manage to avoid writing to the WAL, despite
>>   having set wal_level=minimal. I even wrote my own function to ALTER
>>   all tables to UNLOGGED, but it failed with "could not change table T
>>   to unlogged because it references logged table". I'm out of ideas on
>>   this one.
>
> You'd have to create and load the table in the same transaction, that is,
> you'd have to run pg_restore with --single-transaction.
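For reference, that invocation would look something like the following (a sketch only; the database name and dump directory are hypothetical placeholders):

```shell
# Sketch: restore schema and data in one transaction, so that with
# wal_level=minimal the table creation and the COPY share a transaction
# and the bulk load can skip WAL logging.
# "mydb" and "dump.dir" are hypothetical placeholders.
pg_restore --single-transaction -d mydb dump.dir
```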
That would restore the schema from the dump, while I want to create the
schema from the SQL code in version control.
Something that might work would be for pg_restore to issue a TRUNCATE
before the COPY. I believe this would require superuser privilege, though,
which I would prefer to avoid. Currently I issue TRUNCATE for all tables
manually before running pg_restore, but of course that runs in a different
transaction, so it doesn't help.
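For a single table, the truncate-and-load-in-one-transaction trick can be done by hand with psql instead of pg_restore; a rough, untested sketch (table name, database name, and data file are hypothetical placeholders):

```shell
# Sketch: TRUNCATE and reload one table inside the same transaction
# (psql --single-transaction wraps the script in BEGIN/COMMIT), so that
# with wal_level=minimal the COPY can avoid WAL for the loaded data.
# "my_table", "mydb" and "my_table.dat" are hypothetical placeholders.
psql --single-transaction -d mydb <<'SQL'
TRUNCATE my_table;
\copy my_table FROM 'my_table.dat'
SQL
```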
By the way, do you see potential problems with using --single-transaction
to restore billion-row tables?
Thank you,
Dimitris