From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Stephen Frost <sfrost(at)snowman(dot)net>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel Seq Scan
Date: 2014-12-08 17:57:38
Message-ID: CA+TgmoYWK4ePQdNFZY2PU-w=SypyxnnpYx6_B+48O2jQ4QhZAA@mail.gmail.com
Lists: pgsql-hackers
On Sat, Dec 6, 2014 at 7:07 AM, Stephen Frost <sfrost(at)snowman(dot)net> wrote:
> For my 2c, I'd like to see it support exactly what the SeqScan node
> supports and then also what ForeignScan supports. That would mean we'd
> then be able to push filtering down to the workers, which would be great.
> Even better would be figuring out how to parallelize an Append node
> (perhaps only possible when the nodes underneath are all SeqScan or
> ForeignScan nodes) since that would allow us to then parallelize the
> work across multiple tables and remote servers.
I don't see how we can support the stuff ForeignScan does; presumably
any parallelism there is up to the FDW to implement, using whatever
in-core tools we provide. I do agree that parallelizing Append nodes
is useful, but let's get one thing done first before we start trying
to do thing #2.
> I'm not entirely following this. How can the worker be responsible for
> its own "plan" when the information passed to it (per the above
> paragraph...) is pretty minimal? In general, I don't think we need to
> have specifics like "this worker is going to do exactly X" because we
> will eventually need some communication to happen between the worker and
> the master process where the worker can ask for more work because it's
> finished what it was tasked with and the master will need to give it
> another chunk of work to do. I don't think we want exactly what each
> worker process will do to be fully formed at the outset because, even
> with the best information available, given concurrent load on the
> system, it's not going to be perfect and we'll end up starving workers.
> The plan, as formed by the master, should be more along the lines of
> "this is what I'm gonna have my workers do" along w/ how many workers,
> etc, and then it goes and does it. Perhaps for an 'explain analyze' we
> return information about which workers actually *did* what, but that's a
> whole different discussion.
I agree with this. For a first version, I think it's OK to start a
worker up for a particular sequential scan and have it help with that
sequential scan until the scan is completed, and then exit. It should
not, as the present version of the patch does, assign a fixed block
range to each worker; instead, workers should allocate a block or
chunk of blocks to work on until no blocks remain. That way, even if
every worker but one gets stuck, the rest of the scan can still
finish.
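To make that concrete, here is a minimal sketch of what per-chunk block
allocation could look like. This is not code from the patch: the names
(ParallelScanShared, claim_block_chunk) and the chunk size are invented
for illustration, and it assumes a counter living in dynamic shared
memory that all participants can see.

/*
 * Hypothetical sketch only -- names are not from the patch.  Each
 * participant atomically claims the next chunk of blocks from a shared
 * counter, so a stalled worker never strands a fixed range of the table.
 */
#include "postgres.h"
#include "port/atomics.h"

#define PARALLEL_SCAN_CHUNK 64		/* blocks claimed per allocation */

typedef struct ParallelScanShared
{
	pg_atomic_uint32 next_block;	/* next unclaimed block number */
	uint32		nblocks;			/* total blocks in the relation */
} ParallelScanShared;

/*
 * Claim the next chunk of blocks.  Returns false once no blocks remain;
 * otherwise sets *start (inclusive) and *end (exclusive).
 */
static bool
claim_block_chunk(ParallelScanShared *shared, uint32 *start, uint32 *end)
{
	uint32		first = pg_atomic_fetch_add_u32(&shared->next_block,
												PARALLEL_SCAN_CHUNK);

	if (first >= shared->nblocks)
		return false;			/* scan is done */
	*start = first;
	*end = Min(first + PARALLEL_SCAN_CHUNK, shared->nblocks);
	return true;
}

The fetch-and-add means no lock is held while claiming, and handing out
a chunk of blocks rather than a single block amortizes even that small
cost across many pages.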
Eventually, we will want to be smarter about sharing workers between
multiple parts of the plan, but I think it is just fine to leave that
as a future enhancement for now.
>> - Master backend is just responsible for coordination among workers.
>> It shares the required information with workers and then fetches the
>> data processed by each worker. With some more logic, we might be
>> able to make the master backend also fetch data from the heap rather
>> than just coordinating among workers.
>
> I don't think this is really necessary...
I think it would be an awfully good idea to make this work. The
master backend may be significantly faster than any of the others
because it has no IPC costs. We don't want to leave our best resource
sitting on the bench.
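Continuing the sketch above (again with made-up names:
launch_scan_workers, scan_block_range, and wait_for_scan_workers are
placeholders for whatever the real worker-management API turns out to
be), having the master participate is just a matter of running the same
claim loop in the master after the workers are launched:

/*
 * Hypothetical: after launching the workers, the master backend runs
 * the same chunk-claiming loop itself instead of only collecting
 * results, so its cycles aren't spent on coordination alone.
 */
static void
master_join_scan(ParallelScanShared *shared, Relation rel)
{
	uint32		start;
	uint32		end;

	launch_scan_workers(shared);			/* placeholder */
	while (claim_block_chunk(shared, &start, &end))
		scan_block_range(rel, start, end);	/* placeholder: scan pages */
	wait_for_scan_workers();				/* placeholder */
}

Because the master claims chunks through the same shared counter, the
load balances itself: it takes a larger share of the blocks when it is
faster and a smaller share when it is busy returning tuples to the
client.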
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company