From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)postgresql(dot)org
Cc: Joachim Wieland <joe(at)mcknight(dot)de>, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Greg Smith <greg(at)2ndquadrant(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, José Arthur Benetasso Villanova <jose(dot)arthur(at)gmail(dot)com>
Subject: Re: directory archive format for pg_dump
Date: 2010-12-16 22:29:50
Message-ID: 201012162329.51796.andres@anarazel.de
Lists: pgsql-hackers
On Thursday 16 December 2010 19:33:10 Joachim Wieland wrote:
> On Thu, Dec 16, 2010 at 12:48 PM, Heikki Linnakangas
> <heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
> > As soon as we have parallel pg_dump, the next big thing is going to be
> > parallel dump of the same table using multiple processes. Perhaps we
> > should prepare for that in the directory archive format, by allowing the
> > data of a single table to be split into multiple files. That way
> > parallel pg_dump is simple: you just split the table in chunks of
> > roughly the same size, say 10GB each, and launch a process for each
> > chunk, writing to a separate file.
>
> How exactly would you "just split the table in chunks of roughly the
> same size" ? Which queries should pg_dump send to the backend? If it
> just sends a bunch of WHERE queries, the server would still scan the
> same data several times since each pg_dump client would result in a
> seqscan over the full table.
I would suggest implementing </> (range) support for tidscans and splitting the
table into segment-sized chunks...
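A rough sketch of what such per-chunk queries could look like, assuming a
hypothetical table "bigtab", the default 1GB segment / 8kB block size
(i.e. 131072 blocks per segment), and that the </> tid scan support is
already in place:

    -- worker 1: dump the first segment-sized chunk
    COPY (SELECT * FROM bigtab
          WHERE ctid >= '(0,0)' AND ctid < '(131072,0)') TO STDOUT;
    -- worker 2: dump the second segment-sized chunk
    COPY (SELECT * FROM bigtab
          WHERE ctid >= '(131072,0)' AND ctid < '(262144,0)') TO STDOUT;
    -- ...one ctid range per segment, each written to its own file

With range-capable tid scans each worker would only read the blocks inside
its own ctid range instead of seqscanning the whole table.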
Andres