From: | "Curtis Faith" <curtis(at)galtair(dot)com> |
---|---|
To: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Parallel Executors [was RE: Threaded Sorting] |
Date: | 2002-10-06 09:17:59 |
Message-ID: | DMEEJMCDOJAKPPFACMPMKEDKCEAA.curtis@galtair.com |
Lists: pgsql-hackers
Tom Lane wrote:
> "Curtis Faith" <curtis(at)galtair(dot)com> writes:
> > What about splitting out parsing, optimization and plan generation from
> > execution and having a separate pool of executor processes?
>
> > As an optimizer finishes with a query plan, it would initiate execution
> > by grabbing an executor from a pool and passing it the plan.
>
> So different executors would potentially handle the queries from a
> single transaction? How will you deal with pushing transaction-local
> state from one to the other?
>
> Even if you restrict it to switching at transaction boundaries, you
> still have session-local state (at minimum user ID and SET settings)
> to worry about.
Hmmm, what transaction boundaries did you mean? Since we are talking
about single-statement parallelization, there must be some specific
internal semantics that you believe need isolation. It seems like
we'd be able to get most of the benefit while restricting the
parallelization in a way that preserves this isolation, but I'm
curious what you were specifically referring to.
The current transaction/user state seems to be stored in process
global space. This could be changed to a pointer to a struct
stored in a backend-specific shared memory area, which would be
accessed by the executor process at execution start. If an executor
process died, the backend would destroy and recreate the shared
memory and restart execution, much like the postmaster does with
backends now.
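To make that concrete, here is a minimal sketch of what I have in
mind (hypothetical names and layout, not PostgreSQL's actual state
structures):

    /* Hypothetical sketch only -- not PostgreSQL's actual state layout. */
    typedef unsigned int TransactionId;     /* as in PostgreSQL */
    typedef unsigned int Oid;

    typedef struct BackendSharedState
    {
        TransactionId   xid;        /* current transaction id */
        Oid             user_id;    /* session user */
        /* ... snapshot info, SET-variable values, etc. ... */
    } BackendSharedState;

    /*
     * Pointer into the backend-specific shared memory area, set when
     * the backend hands a plan to this executor process.
     */
    static BackendSharedState *backend_state;

    void
    executor_attach_state(BackendSharedState *state)
    {
        /* The executor reads (and ideally never writes) this state. */
        backend_state = state;
    }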
To the extent the executor process might make changes to the state,
which I'd try to avoid if possible (I don't know if it is), the
executors could obtain locks. Otherwise, if the executions were
constrained to isolated elements (changes to different indexes, for
example), it seems like it would be possible using an architecture
where you have:
Main Executor: Responsible for updating global metadata from
each sub-executor and for assembling the results of multiple
executions. In the case of multi-executor sorts, the main executor
would perform a merge sort on its own and its subordinates'
pre-sorted subsets of the relation (a sketch of this merge step
follows the executor descriptions below).
Subordinate Executor: Executes sub-plans and writes results or
metadata-update information directly into the front end's shared memory.
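As promised above, here is a self-contained sketch of the main
executor's merge step (toy types, ints standing in for tuples, and a
linear scan where a real implementation would use a heap over tuples
in shared memory):

    #include <stdio.h>

    typedef struct SortedSubset
    {
        const int *vals;    /* subordinate's pre-sorted results */
        int        n;       /* number of values */
        int        pos;     /* merge cursor */
    } SortedSubset;

    /* Emit the merged, globally sorted stream. */
    static void
    merge_subsets(SortedSubset *subs, int nsubs)
    {
        for (;;)
        {
            int best = -1;

            /* Linear scan for the smallest head; a heap would do
             * this in O(log nsubs) per output value. */
            for (int i = 0; i < nsubs; i++)
            {
                if (subs[i].pos < subs[i].n &&
                    (best < 0 || subs[i].vals[subs[i].pos] <
                                 subs[best].vals[subs[best].pos]))
                    best = i;
            }
            if (best < 0)
                break;          /* all subsets exhausted */
            printf("%d\n", subs[best].vals[subs[best].pos++]);
        }
    }

    int
    main(void)
    {
        /* Three subordinates' pre-sorted subsets of the relation. */
        int a[] = {1, 4, 9};
        int b[] = {2, 3, 8};
        int c[] = {5, 6, 7};
        SortedSubset subs[] = {{a, 3, 0}, {b, 3, 0}, {c, 3, 0}};

        merge_subsets(subs, 3);     /* prints 1 through 9 in order */
        return 0;
    }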
To make this optimal, the index code would have to be changed to
support the idea of partial scans. In the case of btrees this would
be pretty easy: use the root page to figure out which index values
delineate the halves, thirds, quarters, etc. of the index space.
I'm not sure what you'd have to do to support this for table scans,
as I don't know the PostgreSQL tuple storage mechanism yet.
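For the btree case, here is a rough sketch of the range-splitting
idea (hypothetical structures and integer keys for illustration;
PostgreSQL's actual btree code works in terms of pages and items,
not this API):

    #include <stdio.h>

    /* Hypothetical: a key range for one subordinate's partial scan. */
    typedef struct ScanRange
    {
        int has_lo, lo;     /* lower bound (inclusive), if any */
        int has_hi, hi;     /* upper bound (exclusive), if any */
    } ScanRange;

    /* Given the nkeys separator keys from the root page, build
     * nkeys + 1 scan ranges, one per subordinate executor. */
    static void
    build_ranges(const int *sep_keys, int nkeys, ScanRange *out)
    {
        for (int i = 0; i <= nkeys; i++)
        {
            out[i].has_lo = (i > 0);
            out[i].lo = (i > 0) ? sep_keys[i - 1] : 0;
            out[i].has_hi = (i < nkeys);
            out[i].hi = (i < nkeys) ? sep_keys[i] : 0;
        }
    }

    int
    main(void)
    {
        int sep[] = {100, 200, 300};    /* separator keys off the root page */
        ScanRange ranges[4];

        build_ranges(sep, 3, ranges);
        for (int i = 0; i < 4; i++)
        {
            printf("subordinate %d scans keys ", i);
            if (ranges[i].has_lo)
                printf(">= %d ", ranges[i].lo);
            else
                printf("from the start ");
            if (ranges[i].has_hi)
                printf("and < %d\n", ranges[i].hi);
            else
                printf("to the end\n");
        }
        return 0;
    }

Each subordinate then runs an ordinary index scan restricted to its
range, and the results come back already partitioned for the main
executor's merge.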
This does not seem like too much architectural complexity or
performance overhead (even for the single-executor case) in exchange
for a big gain in complex-query performance.
> Being able to apply multiple CPUs to a single query is attractive,
> but I've not yet seen schemes for it that don't look like the extra
> CPU power would be chewed up in overhead :-(.
Do you remember specific overhead problems/issues?
- Curtis