From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Rushabh Lathia <rushabh(dot)lathia(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Peter Eisentraut <peter(dot)eisentraut(at)2ndquadrant(dot)com>, David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: crashes due to setting max_parallel_workers=0
Date: 2017-03-27 16:26:17
Message-ID: 32460.1490631977@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Mon, Mar 27, 2017 at 9:54 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Since this has now come up twice, I suggest adding a comment there
>> that explains why we're intentionally ignoring max_parallel_workers.
> Good idea. How about the attached?
WFM ... but seems like there should be some flavor of this statement
in the user-facing docs too (ie, "max_parallel_workers_per_gather >
max_parallel_workers is a bad idea unless you're trying to test what
happens when a plan can't get all the workers it planned for"). The
existing text makes some vague allusions suggesting that the two
GUCs might be interrelated, but I think it could be improved.
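For concreteness, the misconfiguration in question looks like this (a sketch; the table name and worker counts are illustrative, not from this thread):

```sql
-- No parallel workers are available cluster-wide ...
SET max_parallel_workers = 0;
-- ... yet the planner may still plan for up to 2 per Gather node.
SET max_parallel_workers_per_gather = 2;

-- EXPLAIN ANALYZE on a parallel-eligible query can then show
-- "Workers Planned: 2" but "Workers Launched: 0", with the leader
-- executing the whole plan by itself.
EXPLAIN (ANALYZE) SELECT count(*) FROM some_large_table;
```

This is only useful for testing how a plan behaves when it cannot obtain the workers it planned for; in normal operation max_parallel_workers_per_gather should not exceed max_parallel_workers.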
regards, tom lane