From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Kyotaro HORIGUCHI <horiguchi(dot)kyotaro(at)lab(dot)ntt(dot)co(dot)jp>
Subject: Re: [DESIGN] ParallelAppend
Date: 2015-11-20 13:36:50
Message-ID: CA+Tgmobw3mCOeBKL72z5szTZqgc1_Quaqr9RoEe1CFy6KwerBQ@mail.gmail.com
Lists: pgsql-hackers
On Fri, Nov 20, 2015 at 12:45 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> Okay, but I think that's not what I am talking about. I am talking about
> the code below in cost_seqscan:
>
> - if (nworkers > 0)
> -     run_cost = run_cost / (nworkers + 0.5);
> + if (path->parallel_degree > 0)
> +     run_cost = run_cost / (path->parallel_degree + 0.5);
>
> It will consider 50% of the master backend's effort for the scan of each
> child relation; does that look correct to you? Wouldn't 50% of the master
> backend's effort instead be what is available to scan all the child
> relations combined?
In the code you originally wrote, you were adding 1 there rather than
0.5. That meant you were expecting the leader to do as much work as
each of its workers, which is clearly a bad estimate, because the
leader also has to do the work of gathering tuples from the workers.
0.5 might not be the right value, but it's surely better than 1.
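
To make the difference concrete, here is a minimal, hypothetical C sketch of
the divisor in question. This is not the actual costsize.c code; the function
name, parameters, and numbers are invented purely for illustration of how the
leader-contribution term changes the per-process run-cost estimate:

    /*
     * Hypothetical sketch: with N workers, dividing by (N + 1) assumes the
     * leader scans a full worker's share; dividing by (N + 0.5) assumes the
     * leader contributes only half a share, since it also spends time
     * gathering tuples from the workers.
     */
    #include <stdio.h>

    static double
    estimated_run_cost(double total_run_cost, int parallel_degree,
                       double leader_contribution)
    {
        if (parallel_degree > 0)
            return total_run_cost / (parallel_degree + leader_contribution);
        return total_run_cost;
    }

    int
    main(void)
    {
        double run_cost = 1000.0;
        int    workers = 2;

        /* Divisor of (2 + 1.0) = 3.0 -> 333.3: credits the leader with a
         * full worker's worth of scanning. */
        printf("divisor +1.0: %.1f\n",
               estimated_run_cost(run_cost, workers, 1.0));

        /* Divisor of (2 + 0.5) = 2.5 -> 400.0: the higher (less optimistic)
         * estimate, reflecting that the leader does less scanning. */
        printf("divisor +0.5: %.1f\n",
               estimated_run_cost(run_cost, workers, 0.5));
        return 0;
    }

Under these made-up numbers, the +0.5 term yields a higher estimated cost per
scan than +1, which matches the point above: crediting the leader with a full
worker's share undercounts the real cost.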
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company