From: Peter Geoghegan <pg(at)heroku(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: ExecGather() + nworkers
Date: 2016-03-07 19:13:19
Message-ID: CAM3SWZQj=LUuvF4MYMZb+B=29VxyNpKRPPfxd7CkVji2MX17sQ@mail.gmail.com
Lists: pgsql-hackers
On Mon, Mar 7, 2016 at 4:04 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> Your point is genuine, but OTOH, if max_parallel_degree = 1 means that
> parallelism is disabled, then it looks somewhat odd to me that setting
> max_parallel_degree = 2 would mean only 1 worker process can be used
> for a parallel query.
I'm not sure that that has to be true.
What is the argument for only using one worker process, say in the
case of a parallel seq scan? I understand that the leader can itself
consume tuples from a parallel seq scan, which seems like a good
principle, but how far does that go, and how useful is it in the
general case? I'm not suggesting that it isn't useful, but I'm not
sure.
How common is it for the leader process to do anything other than
coordinate and consume from worker processes?
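To make the question concrete, here is a toy sketch (not PostgreSQL code;
the function name and the even-split assumption are mine) of how the
per-participant share of a scan changes when the leader consumes tuples
alongside its workers:

```python
# Toy model only: illustrates why even a single worker can help when the
# leader also participates in a parallel seq scan -- the scan is then
# split two ways instead of being done entirely by one process.

def scan_share(total_rows: int, nworkers: int, leader_participates: bool) -> int:
    """Rows each participant scans, assuming a perfectly even split
    (an idealization; real scans divide work dynamically by block)."""
    participants = nworkers + (1 if leader_participates else 0)
    return total_rows // participants

print(scan_share(1_000_000, nworkers=1, leader_participates=True))   # 500000
print(scan_share(1_000_000, nworkers=2, leader_participates=True))   # 333333
```

Under this model the two proposed interpretations of the setting differ
only in labeling: whether the degree counts workers alone or counts all
participants including the leader.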
--
Peter Geoghegan
| | From | Date | Subject |
|---|---|---|---|
| Next Message | MauMau | 2016-03-07 19:31:17 | Re: How can we expand PostgreSQL ecosystem? |
| Previous Message | Robert Haas | 2016-03-07 19:03:08 | Re: Freeze avoidance of very large table. |