From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "David Rowley" <dgrowleyml(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Should pg_dump dump larger tables first?
Date: 2013-01-29 23:34:30
Message-ID: 13906.1359502470@sss.pgh.pa.us
Lists: pgsql-hackers

"David Rowley" <dgrowleyml(at)gmail(dot)com> writes:
> If pg_dump was to still follow the dependencies of objects, would there be
> any reason why it shouldn't backup larger tables first?

Pretty much every single discussion/complaint about pg_dump's ordering
choices has been about making its behavior more deterministic, not less
so. So I can't imagine such a change would go over well with most folks.

Also, it's far from obvious to me that "largest first" is the best rule
anyhow; it's likely to be more complicated than that.

But anyway, the right place to add this sort of consideration is in
pg_restore --parallel, not pg_dump. I don't know how hard it would be
for the scheduler algorithm in there to take table size into account,
but at least in principle it should be possible to find out the size of
the (compressed) table data from examination of the archive file.
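
For illustration only, here is a minimal C sketch of the kind of
size-aware ordering such a scheduler could apply when picking the next
item to hand to a worker. The TocEntry struct and dataLength field
below are hypothetical stand-ins for this sketch, not the actual
definitions from pg_backup_archiver.h:

/*
 * Sketch: sort ready-to-restore items so the largest table data gets
 * started first.  Names here are illustrative, not the real archiver API.
 */
#include <stdio.h>
#include <stdlib.h>

typedef struct
{
	const char *name;			/* object name, e.g. table name */
	long		dataLength;		/* (compressed) data size recorded in the archive */
} TocEntry;

/* Order larger entries first so the biggest tables begin restoring earliest. */
static int
size_compare(const void *a, const void *b)
{
	const TocEntry *ta = (const TocEntry *) a;
	const TocEntry *tb = (const TocEntry *) b;

	if (ta->dataLength > tb->dataLength)
		return -1;
	if (ta->dataLength < tb->dataLength)
		return 1;
	return 0;
}

int
main(void)
{
	TocEntry	items[] = {
		{"small_table", 1024},
		{"huge_table", 50L * 1024 * 1024},
		{"medium_table", 512 * 1024}
	};

	qsort(items, 3, sizeof(TocEntry), size_compare);

	for (int i = 0; i < 3; i++)
		printf("%s (%ld bytes)\n", items[i].name, items[i].dataLength);
	return 0;
}

Note that the parallelism itself is requested with pg_restore's
-j/--jobs option; the question here is only the order in which ready
items are handed to those worker jobs.
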
regards, tom lane