From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Alex Ignatov <a(dot)ignatov(at)postgrespro(dot)ru>, Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel sec scan in plpgsql
Date: 2016-09-22 12:36:42
Message-ID: CAA4eK1+deWHDvWPvBXAXPpGH=dhRWgt485A_HvLEYaWU3JBpKA@mail.gmail.com
Lists: pgsql-hackers
On Tue, Sep 20, 2016 at 8:31 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Tue, Sep 20, 2016 at 9:24 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>> I think the point here is that for any case where a count of rows to
>> be fetched is passed, we disable parallelism. There are many genuine
>> cases, like select count(*) into cnt ..., which will run to completion,
>> but because plpgsql passes the row count as 1 or 2, they don't enter
>> parallel mode. There are a couple of other cases like that. Do you see
>> a reason for not enabling parallelism in such cases?
>
> If we can somehow know that the rowcount which is passed is greater
> than or equal to the actual number of rows which will be generated,
> then it's fine to enable parallelism.
>
I think for certain cases, like the INTO clause, the row count passed
will be equal to the actual number of rows returned; otherwise the query
raises an error. So we can pass that information down to the executor
layer. Another kind of case worth considering is when plpgsql fetches a
limited number of rows at a time but keeps fetching until the end, as in
exec_stmt_return_query(). A minimal sketch of both patterns is below.
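
For reference, a minimal sketch of the two patterns under discussion
(the table name big_table and the function names are hypothetical):

CREATE FUNCTION count_big_table() RETURNS bigint AS $$
DECLARE
    cnt bigint;
BEGIN
    -- SELECT ... INTO: plpgsql asks the executor for only 1 or 2 rows
    -- (enough to detect a multi-row result), even though the aggregate
    -- itself must scan the whole table to completion.
    SELECT count(*) INTO cnt FROM big_table;
    RETURN cnt;
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION return_big_table() RETURNS SETOF big_table AS $$
BEGIN
    -- RETURN QUERY (exec_stmt_return_query): rows are fetched a limited
    -- batch at a time, but fetching always continues until the query is
    -- exhausted.
    RETURN QUERY SELECT * FROM big_table;
END;
$$ LANGUAGE plpgsql;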
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com