From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Hans Buschmann <buschmann(at)nidsa(dot)net>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Andres Freund <andres(at)anarazel(dot)de>
Subject: Re: Assorted improvements in pg_dump
Date: 2021-10-24 21:10:55
Message-ID: 3780613.1635109855@sss.pgh.pa.us
Lists: pgsql-hackers

Hans Buschmann <buschmann(at)nidsa(dot)net> writes:
> Some time ago I experimented with a customer database dump in
> parallel directory mode (-F directory, -j 2-4).
> I noticed it took quite a long time to complete.
> Further investigation showed that in this mode, with multiple jobs,
> the tables are processed in decreasing size order, which makes sense:
> it avoids a long tail where one big table in a single job prolongs
> the overall dump time.
> Exactly one table took very long, yet it seemed to be of moderate size.
> But the size determination fails to consider the size of TOAST
> tables, and this table had a big associated TOAST table for its
> bytea column(s).
Hmm, yeah, we just use pg_class.relpages for scheduling parallel dumps.
I'd supposed that would be fine, but maybe it's worth being smarter.
I think it should be sufficient to add on the toast table's relpages
value; that's maintained by autovacuum on the same terms as relpages
for regular tables. See 0005 below.
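
For illustration, here is a minimal sketch of the sort of catalog
query that implies (the reltoastrelid join is real catalog structure,
though the query pg_dump actually ends up issuing may differ):

    -- Heap pages plus the associated TOAST table's pages.
    -- reltoastrelid is 0 for tables without a TOAST table, so the
    -- LEFT JOIN finds no match and COALESCE supplies 0 in that case.
    SELECT c.oid::pg_catalog.regclass AS tablename,
           c.relpages + COALESCE(t.relpages, 0) AS total_pages
    FROM pg_catalog.pg_class c
    LEFT JOIN pg_catalog.pg_class t ON t.oid = c.reltoastrelid
    WHERE c.relkind = 'r';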
Here's an update of this patch series:
0001 is the same as before, except I changed collectComments and
collectSecLabels to strdup the strings they want and then release
their PGresults. The previous behavior confused valgrind's leak
tracking, which is only a minor annoyance, but I think we can
justify changing it now that these functions don't save all of
the collected comments or seclabels. In particular, we've got
no need for the several thousand comments on built-in objects,
which makes that PGresult at least 100KB bigger than what we're
going to keep.
0002 is updated to account for commit 2acc84c6f.
0003 is the same except I added a missing free().
0004 is a new patch based on an idea from Andres Freund [1]:
in the functions that repetitively issue the same query against
different tables, issue just one query and use a WHERE clause
to restrict the output to the tables we care about. I was
skeptical about this to start with, but it turns out to be
quite a spectacular win. On my machine, the time to pg_dump
the regression database (with "-s") drops from 0.91 seconds
to 0.39 seconds. For a database with 10000 toy tables, the
time drops from 18.1 seconds to 2.3 seconds.
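
To make the shape of the change concrete (with made-up OIDs; the
actual queries in 0004 are per-catalog and considerably bigger):
instead of running a per-table query such as

    SELECT a.attname, a.atttypid
    FROM pg_catalog.pg_attribute a
    WHERE a.attrelid = 16385;

once for each table, we issue a single query restricted to all the
tables of interest, along these lines:

    SELECT a.attrelid, a.attname, a.atttypid
    FROM pg_catalog.pg_attribute a
    WHERE a.attrelid IN
          (SELECT unnest('{16385,16392,16401}'::pg_catalog.oid[]));

so the server round-trip and query startup costs are paid once
rather than thousands of times.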
0004 is not committable as-is, because it assumes that the source
server has single-array unnest(), which is not true before 8.4.
We could fix that by using "oid = ANY(array-constant)" conditions
instead, but I'm unsure about the performance properties of that
for large OID arrays on those old server versions. There's a
discussion at [2] about whether it'd be okay to drop pg_dump's
support for pre-8.4 servers; if we do so, then it would become
unnecessary to do anything more here.
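
(For reference, that alternative form would look something like

    SELECT a.attrelid, a.attname, a.atttypid
    FROM pg_catalog.pg_attribute a
    WHERE a.attrelid = ANY ('{16385,16392,16401}'::pg_catalog.oid[]);

again with made-up OIDs; how gracefully those old servers' planners
cope with a very large array constant there is exactly the open
question.)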
0005 implements your suggestion of accounting for TOAST data while
scheduling parallel dumps. While looking at that, I realized that
there's a pre-existing bug which this would exacerbate: on machines
with 32-bit off_t, dataLength can overflow. Admittedly such machines
are just about extinct in the wild, but we do still take the trouble
to support the case. So 0005 also includes code to check for overflow
and clamp
the result to INT_MAX blocks.
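
(To spell out the arithmetic behind the figure below: INT_MAX blocks
at the default BLCKSZ of 8192 bytes is about 2^31 * 8KB = 16TB, so
the clamp only bites for relations beyond that size.)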
Maybe we should back-patch 0005. OTOH, how likely is it that anyone
is wrangling tables exceeding 16TB on a machine with 32-bit off_t?
Or that poor parallel dump scheduling would be a real problem in
such a case?
Lastly, 0006 implements the other idea we'd discussed in the other
thread: for queries that are issued repetitively but not within a
single pg_dump function invocation, use PREPARE/EXECUTE to cut down
the overhead. This gets only diminishing returns after 0004, but
it still brings "pg_dump -s regression" down from 0.39s to 0.33s,
so maybe it's worth doing. I stopped after caching the plans for
functions/aggregates/operators/types, though. The remaining sorts
of objects aren't likely to appear in typical databases enough times
to be worth worrying over. (This patch will be a net loss if there
are more than zero but fewer than perhaps 10 instances of an object
type, so there are definitely reasons beyond laziness for not doing
more.)
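
The mechanism is just ordinary PREPARE/EXECUTE; schematically
(statement name and column list invented for illustration):

    -- Prepared once, the first time we need to dump a function:
    PREPARE getFuncDef(pg_catalog.oid) AS
    SELECT proname, prosrc
    FROM pg_catalog.pg_proc
    WHERE oid = $1;

    -- Each subsequent function costs only an EXECUTE,
    -- skipping repeated parse analysis and planning:
    EXECUTE getFuncDef('16385');

The tradeoff noted above is that the PREPARE itself isn't free, so
it only pays for itself once the statement is executed enough times.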
regards, tom lane
[1] https://www.postgresql.org/message-id/20211022055939.z6fihsm7hdzbjttf%40alap3.anarazel.de
[2] https://www.postgresql.org/message-id/flat/2923349.1634942313%40sss.pgh.pa.us
Attachment | Content-Type | Size
---|---|---
0001-fix-component-masking-2.patch | text/x-diff | 47.1 KB
0002-rethink-ACL-handling-2.patch | text/x-diff | 121.1 KB
0003-move-unsafe-function-calls-2.patch | text/x-diff | 10.0 KB
0004-bulk-queries-2.patch | text/x-diff | 74.2 KB
0005-fix-toast-sched-2.patch | text/x-diff | 4.6 KB
0006-prepare-repeated-queries-2.patch | text/x-diff | 33.9 KB