From: Aleksander Alekseev <aleksander(at)timescale(dot)com>
To: PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: [PATCH] pg_dump: lock tables in batches
Date: 2022-12-07 15:08:45
Message-ID: CAJ7c6TO4z1+OBa-R+fC8FnaUgbEWJUf2Kq=nRngTW5EXtKru2g@mail.gmail.com
Lists: pgsql-hackers
Hi hackers,
A colleague of mine reported a slight inconvenience with pg_dump.
He is dumping data from a remote server whose database contains
several thousand tables. Making the dump locally and/or using
pg_basebackup and/or logical replication is not an option. What
pg_dump currently does is send LOCK TABLE queries one after another,
and every query costs an extra round trip. So if we have, let's say,
2000 tables and every round trip takes 100 ms, then ~3.3 minutes are
spent in a not particularly useful way.
What he proposes is taking the locks in batches. I.e. instead of:
LOCK TABLE foo IN ACCESS SHARE MODE;
LOCK TABLE bar IN ACCESS SHARE MODE;
do:
LOCK TABLE foo, bar, ... IN ACCESS SHARE MODE;
The proposed patch makes this change. It's pretty straightforward and
as a side effect saves a bit of network traffic too.
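To illustrate the idea, here is a small Python sketch (hypothetical helper, not the actual patch, which modifies pg_dump's C code; real table names would also need proper identifier quoting) that builds one LOCK TABLE statement per batch of tables instead of one per table:

```python
# Hypothetical sketch of the batching idea: each yielded statement
# locks up to batch_size tables, so each batch costs one round trip
# instead of one round trip per table.

def batch_lock_statements(tables, batch_size=100):
    """Yield LOCK TABLE statements covering `tables` in batches."""
    for i in range(0, len(tables), batch_size):
        chunk = tables[i:i + batch_size]
        yield "LOCK TABLE %s IN ACCESS SHARE MODE;" % ", ".join(chunk)

tables = ["public.foo", "public.bar", "public.baz"]
stmts = list(batch_lock_statements(tables, batch_size=2))
# Three tables become two statements here; with 2000 tables at 100 ms
# per round trip, 2000 trips (~200 s) shrink to 20 trips (~2 s) at
# batch_size=100.
```

The actual patch simply accumulates the table names into one query string, since LOCK TABLE accepts a comma-separated list of tables.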
Thoughts?
--
Best regards,
Aleksander Alekseev
Attachment: v1-0001-pg_dump-lock-tables-in-batches.patch (application/octet-stream, 2.9 KB)