Re: Behavior of debug_parallel_query=regress

From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Rafsun Masud Prince <rafsun(dot)masud(dot)99(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Behavior of debug_parallel_query=regress
Date: 2024-02-27 11:34:06
Message-ID: CAApHDvoz7iwUAGbcEFYMi+vZbuPC-YDLnoPCqT37OfK9Nk8oAw@mail.gmail.com
Lists: pgsql-general

On Tue, 27 Feb 2024 at 23:23, Rafsun Masud Prince
<rafsun(dot)masud(dot)99(at)gmail(dot)com> wrote:
> I am looking for a combination of the 'off' and 'regress' states, which is:
> use parallel if it improves performance + suppress the context line (if
> a parallel plan is used)
>
> Our project, Apache AGE, has a regression test for cypher MATCH queries. If
> that test is run repeatedly, the optimizer chooses a parallel plan at a random
> iteration (the issue is reported here:
> https://github.com/apache/age/issues/1439)
> In that case, the test fails due to the addition of 'CONTEXT: parallel worker'
> line in the diff.

In our regression tests, we normally adjust parallel_setup_cost and
parallel_tuple_cost, and maybe
min_parallel_index_scan_size/min_parallel_table_scan_size, to force a
parallel plan when we want one. If we don't want one, we set
max_parallel_workers_per_gather to 0.
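
For example, a test setup could look something like this (a rough
sketch only; the specific values and queries are illustrative, not
taken from any particular regression test):

    -- Cheapen parallelism so the planner reliably picks a parallel plan
    SET parallel_setup_cost = 0;
    SET parallel_tuple_cost = 0;
    SET min_parallel_table_scan_size = 0;
    SET min_parallel_index_scan_size = 0;
    -- ... queries expected to use a parallel plan ...

    -- Or, to make sure no parallel plan is chosen at all
    SET max_parallel_workers_per_gather = 0;
    -- ... queries expected to stay serial ...
    RESET max_parallel_workers_per_gather;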

I don't think the feature you propose would be much use: if the test
is looking at the EXPLAIN output, the parallel plan won't look
anything like the serial plan. All debug_parallel_query = 'regress'
does is add a Gather node with a single worker at the top of the plan
and then suppress it from EXPLAIN, so the EXPLAIN output looks the
same as the serial plan. If the planner chooses a parallel plan of
its own accord, it'd look nothing like the serial plan.
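
Roughly, the difference looks like this (a sketch only; "t" is a
hypothetical table and the exact plan shape depends on your data and
settings):

    SET debug_parallel_query = 'on';
    EXPLAIN (COSTS OFF) SELECT count(*) FROM t;
    -- A Gather node with a single worker appears at the top of the plan.

    SET debug_parallel_query = 'regress';
    EXPLAIN (COSTS OFF) SELECT count(*) FROM t;
    -- The same Gather is still used at execution time, but EXPLAIN hides
    -- it, so the output matches the serial plan.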

David
