From: Fabrízio de Royes Mello <fabriziomello(at)gmail(dot)com>
To: Aleksander Alekseev <aleksander(at)timescale(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: [PATCH] pg_dump: lock tables in batches
Date: 2022-12-07 15:30:48
Message-ID: CAFcNs+oCUzgqS5COUXUK_C-=yKUEAp_P45bzm+35De+F-ZG2zg@mail.gmail.com
Lists: pgsql-hackers
On Wed, Dec 7, 2022 at 12:09 PM Aleksander Alekseev <aleksander(at)timescale(dot)com> wrote:
>
> Hi hackers,
>
> A colleague of mine reported a slight inconvenience with pg_dump.
>
> He is dumping the data from a remote server. There are several
> thousand tables in the database. Making the dump locally, using
> pg_basebackup, or using logical replication is not an option. So what
> pg_dump currently does is send LOCK TABLE queries one after another,
> and every query costs an extra round trip. If we have, say, 2000
> tables and every round trip takes 100 ms, that is 2000 × 100 ms =
> 200 s, well over three minutes spent in a not very useful way.
>
> What he proposes is taking the locks in batches, i.e. instead of:
>
> LOCK TABLE foo IN ACCESS SHARE MODE;
> LOCK TABLE bar IN ACCESS SHARE MODE;
>
> do:
>
> LOCK TABLE foo, bar, ... IN ACCESS SHARE MODE;
>
> The proposed patch makes this change. It's pretty straightforward and
> as a side effect saves a bit of network traffic too.
>
+1 for this change. It will improve dump times for databases with
thousands of relations.
The code LGTM; it passes all tests and compiles without warnings.
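
For anyone skimming the thread, here is a minimal sketch in C of the
batching idea, using libpq directly. It is illustrative only, not the
actual patch: the function name is made up, and the table names are
assumed to arrive already quoted and schema-qualified.

/*
 * Minimal sketch (not the actual patch): build one LOCK TABLE
 * statement covering all tables instead of issuing one per table.
 */
#include <stdio.h>
#include "libpq-fe.h"
#include "pqexpbuffer.h"

static void
lock_tables_in_batch(PGconn *conn, const char *const *qualified_names,
                     int ntables)
{
    PQExpBuffer query = createPQExpBuffer();

    appendPQExpBufferStr(query, "LOCK TABLE ");
    for (int i = 0; i < ntables; i++)
    {
        if (i > 0)
            appendPQExpBufferStr(query, ", ");
        appendPQExpBufferStr(query, qualified_names[i]);
    }
    appendPQExpBufferStr(query, " IN ACCESS SHARE MODE");

    /* one statement, one round trip, regardless of ntables */
    PGresult *res = PQexec(conn, query->data);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
        fprintf(stderr, "LOCK TABLE failed: %s", PQerrorMessage(conn));

    PQclear(res);
    destroyPQExpBuffer(query);
}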
Regards,
--
Fabrízio de Royes Mello