From: Andrew Dunstan <andrew@dunslane.net>
To: Robert Haas <robertmhaas@gmail.com>
Cc: Tom Lane <tgl@sss.pgh.pa.us>, Heikki Linnakangas <heikki.linnakangas@enterprisedb.com>, Joachim Wieland <joe@mcknight.de>, pgsql-hackers <pgsql-hackers@postgresql.org>
Subject: Re: WIP patch for parallel pg_dump
Date: 2010-12-03 00:21:28
Message-ID: 4CF83808.2010307@dunslane.net
Lists: pgsql-hackers
On 12/02/2010 07:13 PM, Robert Haas wrote:
> On Thu, Dec 2, 2010 at 5:32 PM, Tom Lane <tgl@sss.pgh.pa.us> wrote:
>> Andrew Dunstan <andrew@dunslane.net> writes:
>>> On 12/02/2010 05:01 PM, Tom Lane wrote:
>>>> In the past, proposals for this have always been rejected on the grounds
>>>> that it's impossible to assure a consistent dump if different
>>>> connections are used to read different tables. I fail to understand
>>>> why that consideration can be allowed to go by the wayside now.
>>> Well, snapshot cloning should allow that objection to be overcome, no?
>> Possibly, but we need to see that patch first not second.
> Yes, by all means let's allow the perfect to be the enemy of the good.
>
That seems like a bit of a cheap shot. Requiring that parallel pg_dump
produce a dump that is as consistent as the one non-parallel pg_dump
currently produces isn't unreasonable. It's not stopping us from moving
forward, it's just not wanting to go backwards.
And it shouldn't be terribly hard. IIRC Joachim has already done some
work on it.
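(For readers following along: "snapshot cloning" means letting one session export its snapshot so that other sessions can adopt it, giving every worker connection an identical view of the database. The sketch below shows the interface as it eventually shipped in PostgreSQL 9.2 as pg_export_snapshot() / SET TRANSACTION SNAPSHOT; at the time of this thread it was still a work-in-progress patch, and the snapshot identifier and table name here are purely illustrative.)

```sql
-- Leader connection: open a transaction at REPEATABLE READ or higher
-- and export its snapshot. The call returns an identifier that names
-- the snapshot, e.g. '00000003-0000001B-1'.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SELECT pg_export_snapshot();

-- Each worker connection: adopt the leader's snapshot before running
-- any query. This must be the first thing done in the transaction,
-- and the leader's exporting transaction must still be open.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';  -- id from the leader

-- All workers now see exactly the same committed data as the leader,
-- so each can dump a different table without losing consistency.
SELECT * FROM some_table;  -- hypothetical table name
COMMIT;
```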
cheers
andrew