| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Andres Freund <andres(at)anarazel(dot)de> |
| Cc: | pgsql-hackers(at)lists(dot)postgresql(dot)org |
| Subject: | Re: Experimenting with hash tables inside pg_dump |
| Date: | 2021-10-22 00:22:56 |
| Message-ID: | 2608600.1634862176@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
Andres Freund <andres(at)anarazel(dot)de> writes:
> Did you measure runtime of pg_dump, or how much CPU it used?
I was looking mostly at wall-clock runtime, though I did notice
that the CPU time looked about the same too.
> I think a lot of
> the time the backend is a bigger bottleneck than pg_dump...
Yeah, that. I tried doing a system-wide "perf" measurement, and soon
realized that a big fraction of the time for a "pg_dump -s" run is
being spent in the planner :-(. I'm currently experimenting with
PREPARE'ing pg_dump's repetitive queries, and it's looking very
promising. More later.
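
(For illustration only, a minimal libpq-level sketch of the PREPARE pattern described above, not pg_dump's actual code; the statement name, query text, and table OIDs are made up. The point is that the query is prepared once and then executed by name for each object, so the backend need not plan it from scratch every time.)

```c
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

static void
dump_one_table(PGconn *conn, const char *table_oid)
{
    const char *paramValues[1] = { table_oid };

    /* Execute the previously prepared statement with this table's OID */
    PGresult *res = PQexecPrepared(conn, "getColumns", 1,
                                   paramValues, NULL, NULL, 0);

    if (PQresultStatus(res) != PGRES_TUPLES_OK)
    {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        exit(1);
    }
    /* ... consume the rows ... */
    PQclear(res);
}

int
main(void)
{
    PGconn *conn = PQconnectdb("");   /* connection info from environment */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        exit(1);
    }

    /* Prepare the repetitive per-table query just once */
    PGresult *res = PQprepare(conn, "getColumns",
                              "SELECT attname, atttypid FROM pg_attribute "
                              "WHERE attrelid = $1 AND attnum > 0",
                              1, NULL);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "PREPARE failed: %s", PQerrorMessage(conn));
        PQclear(res);
        exit(1);
    }
    PQclear(res);

    dump_one_table(conn, "1259");     /* hypothetical OIDs: pg_class ... */
    dump_one_table(conn, "1249");     /* ... and pg_attribute */

    PQfinish(conn);
    return 0;
}
```
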
regards, tom lane