From: Andres Freund <andres(at)anarazel(dot)de>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Experimenting with hash tables inside pg_dump
Date: 2021-10-22 01:09:36
Message-ID: 20211022010936.lqheh35auhxcqaif@alap3.anarazel.de
Lists: pgsql-hackers
Hi,
On 2021-10-21 20:22:56 -0400, Tom Lane wrote:
> Andres Freund <andres(at)anarazel(dot)de> writes:
> Yeah, that. I tried doing a system-wide "perf" measurement, and soon
> realized that a big fraction of the time for a "pg_dump -s" run is
> being spent in the planner :-(.
A trick for seeing the proportions of this easily in perf is to start both
postgres and pg_dump pinned to a specific CPU, and then profile just that CPU.
That gets rid of most of the noise from other programs etc.
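As a sketch of that setup (the CPU number, data directory, and database name here are illustrative, not from the thread), on Linux with taskset and perf:

```shell
# Pin the server to CPU 3 (arbitrary core choice), then start it.
taskset -c 3 postgres -D /path/to/datadir &

# Run pg_dump pinned to the same CPU, so client and server time
# both land in the same per-CPU profile.
taskset -c 3 pg_dump -s mydb > /dev/null &

# Profile only CPU 3, with call graphs; everything running on
# other CPUs is excluded from the recording.
perf record -C 3 -g -- sleep 10
perf report
```

Since both processes share one CPU, the relative widths of the postgres and pg_dump stacks in `perf report` directly show how the wall-clock time splits between them.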
> I'm currently experimenting with
> PREPARE'ing pg_dump's repetitive queries, and it's looking very
> promising. More later.
Good idea.
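For illustration, the shape of that approach (the statement name and query here are hypothetical, not pg_dump's actual queries) is to prepare the parameterized query once per session and then execute it per object, paying the parse/plan cost only once:

```sql
-- Prepared once, at first use.
PREPARE getAttrs(pg_catalog.oid) AS
  SELECT attname, atttypid
  FROM pg_catalog.pg_attribute
  WHERE attrelid = $1 AND attnum > 0 AND NOT attisdropped
  ORDER BY attnum;

-- Then executed cheaply for each table, skipping repeated planning.
EXECUTE getAttrs('mytable'::regclass);
```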
I wonder though if for some of them we should instead replace the per-object
queries with one query returning the information for all objects of a type. It
doesn't make all that much sense that we build and send one query for each
table and index.
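A batched variant might look something like this (a sketch only; the column list and OIDs are made up): fetch the information for all tables of interest in one round trip, keyed by the owning table's OID, and let pg_dump distribute the rows to its objects client-side:

```sql
-- One query for all tables instead of one query per table;
-- rows are matched back to pg_dump's objects via attrelid.
SELECT attrelid, attname, atttypid
FROM pg_catalog.pg_attribute
WHERE attrelid = ANY('{16384,16390,16402}'::pg_catalog.oid[])
  AND attnum > 0 AND NOT attisdropped
ORDER BY attrelid, attnum;
```

This trades per-object round trips for one larger result set, which also amortizes the per-query network and executor-startup overhead.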
Greetings,
Andres Freund