From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_dump test instability
Date: 2018-08-27 15:59:44
Message-ID: 20180827155943.GP3326@tamriel.snowman.net
Lists: pgsql-hackers
Greetings,
* Tom Lane (tgl(at)sss(dot)pgh(dot)pa(dot)us) wrote:
> Stephen Frost <sfrost(at)snowman(dot)net> writes:
> > * Tom Lane (tgl(at)sss(dot)pgh(dot)pa(dot)us) wrote:
> >> However, at least for the directory-format case (which I think is the
> >> only one supported for parallel restore), we could make it compare the
> >> file sizes of the TABLE DATA items. That'd work pretty well as a proxy
> >> for both the amount of effort needed for table restore, and the amount
> >> of effort needed to build indexes on the tables afterwards.
>
> > Parallel restore also works w/ custom-format dumps.
>
> Really. Well then the existing code is even more broken, because it
> only does this sorting for directory output:
>
> /* If we do a parallel dump, we want the largest tables to go first */
> if (archiveFormat == archDirectory && numWorkers > 1)
> sortDataAndIndexObjectsBySize(dobjs, numObjs);
>
> so that parallel restore is completely left in the lurch with a
> custom-format dump.
Sorry for not being clear: it's only possible to parallel *dump* to a
directory-format dump, and the above code performs that sort-by-size
before executing a parallel dump. One might wonder why there's a check
on archiveFormat at all, though: numWorkers shouldn't be able to be >1
except when the archive format supports parallel dump, and if it
supports parallel dump, then we should try to dump out the tables
largest-first.
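A minimal sketch of that simplification (this just drops the format
check from the snippet quoted above, assuming numWorkers is already
forced to 1 for formats that can't be dumped in parallel):

    /*
     * Hypothetical simplification: numWorkers > 1 should only be
     * reachable for formats that support parallel dump, so the
     * archiveFormat test is redundant and the size-based sort can be
     * driven by numWorkers alone.
     */
    if (numWorkers > 1)
        sortDataAndIndexObjectsBySize(dobjs, numObjs);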
Parallel *restore* can be done from either a custom-format dump or from
a directory-format dump. I agree that we should separate the concerns
and perform independent sorting on the restore side based on the
relative sizes of the tables in the dump (be it custom format or
directory format). While compression means those stored sizes won't be
exactly proportional to restore effort, I expect they'll generally be
close enough to avoid most cases where a single worker gets stuck
working on a large table at the end after all the other work is done.
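To illustrate the restore-side idea (the names below are hypothetical
stand-ins, not pg_restore's actual internals): order the TABLE DATA
entries by their stored, possibly compressed, size before handing them
to the workers, largest first:

    #include <stdlib.h>

    /* Hypothetical stand-in for a TOC data entry. */
    typedef struct
    {
        const char *tableName;
        long long   storedBytes;  /* size of the (possibly compressed) data in the dump */
    } DataEntry;

    /* qsort comparator: largest stored size first */
    static int
    compareBySizeDesc(const void *a, const void *b)
    {
        const DataEntry *ea = (const DataEntry *) a;
        const DataEntry *eb = (const DataEntry *) b;

        if (ea->storedBytes < eb->storedBytes)
            return 1;
        if (ea->storedBytes > eb->storedBytes)
            return -1;
        return 0;
    }

    /* Dispatch the biggest tables first so a huge table isn't left for the end. */
    static void
    sortEntriesBySize(DataEntry *entries, size_t n)
    {
        qsort(entries, n, sizeof(DataEntry), compareBySizeDesc);
    }

Descending order is the point: the largest jobs get dispatched first,
so the tail of the schedule is made up of small tables rather than one
huge one.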
> But I imagine we can get some measure of table data size out of a custom
> dump too.
I would think so.
Thanks!
Stephen