From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Janning Vygen <vygen(at)kicktipp(dot)de>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: slow pg_dump with bytea
Date: 2023-10-20 11:52:17
Message-ID: cb80c6f3c21a798d29f7db46c78e650bbcd95705.camel@cybertec.at
Lists: pgsql-bugs
On Fri, 2023-10-20 at 12:26 +0200, Janning Vygen wrote:
> I don't know if the PG developers are aware of this:
>
> https://serverfault.com/questions/1081642/postgresql-13-speed-up-pg-dump-to-5-minutes-instead-of-70-minutes
>
> But this question is quite popular, and many users like the solution.
> So maybe you could fix this by changing pg_dump so that it does not
> compress any bytea data.
Doesn't sound like a bug to me.
Compression is determined when "pg_dump" starts. How should it guess that
there is a binary column with compressed data in some table? Even if it could,
I wouldn't feel comfortable with a "pg_dump" intelligent enough to make that
decision automatically for me (and get it wrong occasionally).
In addition, I don't think that this problem is limited to compressed
binary data. In my experience, compressed dumps are always slower than
uncompressed ones. It is a speed versus size trade-off.
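
If speed matters more than dump size, you can disable compression
explicitly. A minimal sketch, using a custom-format dump and a
hypothetical database and file name:

    # -Fc selects the custom archive format, -Z0 turns compression off
    pg_dump -Fc -Z0 -f mydb.dump mydb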
By the way, PostgreSQL v16 introduced compression with "lz4" and "zstd"
in "pg_dump", which is much faster.
Yours,
Laurenz Albe