From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Dilip Kumar <dilipbalaut(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: why not parallel seq scan for slow functions
Date: 2017-08-18 11:18:02
Message-ID: CAA4eK1KUYk8XbYwnK3CE9VAm_w_oJmX-x-3+_FPrRV0BQYhr7g@mail.gmail.com
Lists: pgsql-hackers
On Thu, Aug 17, 2017 at 2:45 PM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
> On Thu, Aug 17, 2017 at 2:09 PM, Dilip Kumar <dilipbalaut(at)gmail(dot)com> wrote:
>>
>> Either we can pass "num_gene" to merge_clump or we can store num_gene
>> in the root. And inside merge_clump we can check. Do you see some more
>> complexity?
>>
I think something like that should work.
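To make it concrete, here is a rough sketch of the kind of check I have
in mind inside merge_clump in geqo_eval.c, assuming we pass num_gene
down as an extra parameter. The surrounding code is elided and the
signatures are from memory, so treat this as an outline rather than the
final patch:

static List *
merge_clump(PlannerInfo *root, List *clumps, Clump *new_clump,
            int num_gene, bool force)
{
    ...
    joinrel = make_join_rel(root, old_clump->joinrel,
                            new_clump->joinrel);
    if (joinrel)
    {
        ...
        /*
         * Consider gathering partial paths only when this is not yet
         * the topmost scan/join rel; the topmost rel would get its
         * gather paths later, once the final target list is known.
         */
        if (old_clump->size + new_clump->size < num_gene)
            generate_gather_paths(root, joinrel);

        /* Find and save the cheapest paths for this joinrel. */
        set_cheapest(joinrel);
        ...
    }
    ...
}

Since old_clump->size + new_clump->size is the number of base relations
in the new joinrel, the condition is false only for the join of all
num_gene relations, i.e. the topmost scan/join rel.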
> After putting some more thought I see one more problem but not sure
> whether we can solve it easily. Now, if we skip generating the gather
> path at top level node then our cost comparison while adding the
> element to pool will not be correct as we are skipping some of the
> paths (gather path). And, it's very much possible that the path1 is
> cheaper than path2 without adding gather on top of it but with gather,
> path2 can be cheaper.
>
I think that should not matter, because the costing of Gather is mainly
based on the number of rows, and that should be the same for both path1
and path2 in this case.
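For reference, the Gather costing is roughly the following (paraphrasing
cost_gather() in costsize.c from memory, so the exact code may differ
slightly):

/* Sketch of how a Gather path is costed on top of a partial path. */
startup_cost = subpath->startup_cost + parallel_setup_cost;
run_cost = (subpath->total_cost - subpath->startup_cost) +
           parallel_tuple_cost * path->rows;
path->startup_cost = startup_cost;
path->total_cost = startup_cost + run_cost;

So the overhead Gather adds on top of a path is parallel_setup_cost plus
parallel_tuple_cost * rows. Since path1 and path2 produce the same number
of rows, Gather adds the same amount to both of them and cannot change
which one is cheaper.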
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com