Removing overhead commands in parallel dump/restore

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: pgsql-hackers(at)postgreSQL(dot)org
Subject: Removing overhead commands in parallel dump/restore
Date: 2016-06-01 14:57:53
Message-ID: 5086.1464793073@sss.pgh.pa.us
Lists: pgsql-hackers

While testing parallel dump/restore over the past few days, I noticed that
it seemed to do an awful lot of duplicative SET commands, which I traced
to the fact that restore was doing _doSetFixedOutputState(AH) in the wrong
place, ie, once per TOC entry not once per worker. Another thing that's
useless overhead is that lockTableForWorker() is doing an actual SQL query
to fetch the name of a table that we already have at hand. Poking around
in this area also convinced me that it was pretty weird for CloneArchive
to be managing encoding, and only encoding, when cloning a dump
connection; that should be handled by setup_connection. I also noticed
several unchecked strdup() calls that of course should be pg_strdup().
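For context on that last point (this is background, not the patch itself): pg_strdup() is the frontend-code convention for a strdup() that cannot silently return NULL; a minimal sketch of the idea, not the actual src/common implementation, looks roughly like this:

	#include <stdio.h>
	#include <stdlib.h>
	#include <string.h>

	/*
	 * Sketch of a fail-fast strdup wrapper: behaves like strdup() but
	 * bails out on out-of-memory instead of returning NULL, so callers
	 * need no per-call error check.
	 */
	static char *
	pg_strdup(const char *in)
	{
		char	   *tmp;

		tmp = strdup(in);
		if (tmp == NULL)
		{
			fprintf(stderr, "out of memory\n");
			exit(1);
		}
		return tmp;
	}

An unchecked bare strdup() would instead hand a NULL pointer back to the caller on OOM and crash somewhere less obvious later.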

I put together the attached patch that cleans all this up. It's hard to
show any noticeable performance difference, but the query log certainly
looks cleaner. Any objections?

regards, tom lane

Attachment Content-Type Size
parallel-overhead-reduction.patch text/x-diff 8.3 KB
