From: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: modeling parallel contention (was: Parallel Append implementation)
Date: 2017-05-05 02:36:58
Message-ID: CAKJS1f99vDEmsYVNp9AfzMM0-KoYr0m6YOtxyw5sEQj45xGw8Q@mail.gmail.com
Lists: pgsql-hackers
On 3 May 2017 at 07:13, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> Multiple people (including David Rowley
> as well as folks here at EnterpriseDB) have demonstrated that for
> certain queries, we can actually use a lot more workers and everything
> works great. The problem is that for other queries, using a lot of
> workers works terribly. The planner doesn't know how to figure out
> which it'll be - and honestly, I don't either.
For me, it seems closely related to the ratio of tuples processed by a
worker to the number it returns. As a general rule, I'd say the higher
this ratio, the more efficient the worker will be, although that doesn't
take into account contention points where workers must wait for fellow
workers to complete some operation. I think parallel_tuple_cost is a
good GUC to have; perhaps we can be smarter about using it when deciding
how many workers should be used.
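To illustrate the point about that ratio, here is a toy cost sketch (my own simplification, not the real PostgreSQL costing code): the default values 0.01 for cpu_tuple_cost and 0.1 for parallel_tuple_cost match postgresql.conf defaults, but the formula itself is only a rough caricature of how Gather costing behaves.

```python
# Toy sketch of why tuples-processed vs tuples-returned matters: work below
# Gather is divided among workers, but every tuple returned to the leader
# pays the parallel_tuple_cost surcharge, which extra workers cannot reduce.
# This is an illustrative model, not PostgreSQL's actual cost functions.

def gather_cost_sketch(tuples_processed, tuples_returned, n_workers,
                       cpu_tuple_cost=0.01, parallel_tuple_cost=0.1):
    # Per-tuple CPU work parallelizes across the worker processes...
    scan_cost = tuples_processed * cpu_tuple_cost / n_workers
    # ...but leader-transfer cost is charged per returned tuple, serially.
    transfer_cost = tuples_returned * parallel_tuple_cost
    return scan_cost + transfer_cost

# A highly selective scan (few tuples returned per tuple processed) is far
# cheaper than one that ships most of its input back to the leader.
print(gather_cost_sketch(1_000_000, 10, 4))
print(gather_cost_sketch(1_000_000, 1_000_000, 4))
```

In this toy model, doubling the workers on the second plan barely helps, since the transfer term dominates; that is the shape of the inefficiency being described.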
By efficiency, I mean that if a query takes 10 seconds with a normal
serial plan and 5 seconds after adding 1 worker, then using that worker
was 100% efficient. I charted this in [1]. It would have been
interesting to chart the same for a query that returned a larger number
of groups, but I ran out of time. Even without testing, though, I think
it's safe to say that more groups == less efficiency, due to more
overhead in parallel tuple communication and more work to do in the
serial portion of the plan.
[1] https://blog.2ndquadrant.com/parallel-monster-benchmark
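The efficiency measure described above can be sketched as speedup over the serial plan divided by the total process count (leader plus workers). The function name and exact formula here are my framing of the definition in the text, not anything from PostgreSQL:

```python
# Parallel efficiency as described above: how much of the ideal linear
# speedup the added processes actually delivered.

def parallel_efficiency(serial_secs, parallel_secs, n_workers):
    n_procs = 1 + n_workers                 # the leader participates too
    speedup = serial_secs / parallel_secs
    return speedup / n_procs

# 10s serial -> 5s with 1 extra worker: a 2x speedup across 2 processes,
# i.e. 100% efficient, matching the example in the text.
print(parallel_efficiency(10, 5, 1))   # prints 1.0
```

By this measure, a plan that drops from 10s to 4s with 3 workers is only 62.5% efficient, which is the kind of diminishing return the worker-count decision would ideally account for.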
--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services