Re: Re: Re: Re: speed up pg_upgrade with large number of tables

From: 杨伯宇(长堂) <yangboyu(dot)yby(at)alibaba-inc(dot)com>
To: "Nathan Bossart" <nathandbossart(at)gmail(dot)com>
Cc: "Daniel Gustafsson" <daniel(at)yesql(dot)se>, "pgsql-hackers(at)lists(dot)postgresql(dot)org" <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Re: Re: Re: speed up pg_upgrade with large number of tables
Date: 2024-07-08 07:22:36
Message-ID: c00591ff-0203-479c-8547-b734f6ce3b29.yangboyu.yby@alibaba-inc.com
Lists: pgsql-hackers

> Thanks! Since you mentioned that you have multiple databases with 1M+
> tables, you might also be interested in commit 2329cad. That should
> speed up the pg_dump step quite a bit.
Wow, I noticed this commit (2329cad) when it appeared in the commitfest. It
has doubled the speed of pg_dump in this scenario. Thank you for your effort!

Besides, https://commitfest.postgresql.org/48/4995/ seems insufficient for
this situation. Some time-consuming functions like check_for_data_types_usage
are not yet able to run in parallel. But those patches could be a good
starting point for a more efficient parallelism implementation; perhaps we
can do that later.
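
To illustrate what such a parallelism implementation might look like (only a
rough sketch, not the actual pg_upgrade code nor the patch in that commitfest
entry), per-database checks like check_for_data_types_usage could in principle
be dispatched to one worker process per database. run_check_for_db() and the
database names below are hypothetical stand-ins:

/*
 * Minimal sketch of per-database check parallelism (not pg_upgrade code).
 * One child process is forked per database; the parent waits for all of
 * them and reports failure if any check failed.  run_check_for_db() is a
 * hypothetical stand-in for a per-database check such as
 * check_for_data_types_usage().
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Hypothetical per-database check: returns 0 on success. */
static int
run_check_for_db(const char *dbname)
{
	/* A real check would connect to the database and run catalog queries. */
	printf("checking database \"%s\" (pid %d)\n", dbname, (int) getpid());
	return 0;
}

int
main(void)
{
	const char *dbs[] = {"postgres", "db_with_1m_tables", "another_db"};
	int			ndbs = (int) (sizeof(dbs) / sizeof(dbs[0]));
	int			failures = 0;

	/* Launch one worker per database. */
	for (int i = 0; i < ndbs; i++)
	{
		pid_t		pid = fork();

		if (pid < 0)
		{
			perror("fork");
			exit(1);
		}
		if (pid == 0)
			_exit(run_check_for_db(dbs[i]) == 0 ? 0 : 1);
	}

	/* Wait for every worker and collect its exit status. */
	for (int i = 0; i < ndbs; i++)
	{
		int			status;

		if (wait(&status) < 0)
		{
			perror("wait");
			exit(1);
		}
		if (!WIFEXITED(status) || WEXITSTATUS(status) != 0)
			failures++;
	}

	if (failures > 0)
	{
		fprintf(stderr, "%d database check(s) failed\n", failures);
		return 1;
	}
	printf("all database checks passed\n");
	return 0;
}

A real implementation would of course reuse pg_upgrade's existing parallel
infrastructure and bound the number of concurrent workers with --jobs rather
than forking one process per database unconditionally.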
