From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Silvio Moioli <moio(at)suse(dot)de>, Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Increasing work_mem slows down query, why?
Date: 2020-03-30 16:49:22
Message-ID: CAFj8pRCDcXOHqKHPztwxSAKkvGufo56-ThJdM=hG6rogcrquDg@mail.gmail.com
Lists: pgsql-performance
On Mon, 30 Mar 2020 at 18:36, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> writes:
> > CTE scan has only 1100 rows, public.rhnpackagecapability has 490964 rows.
> > But planner does hash from public.rhnpackagecapability table. It cannot be
> > very effective.
>
> [ shrug... ] Without stats on the CTE output, the planner is very
> leery of putting it on the inside of a hash join. The CTE might
> produce output that ends up in just a few hash buckets, degrading
> the join to something not much better than a nested loop. As long
> as there's enough memory to hash the known-well-distributed table,
> putting it on the inside is safer and no costlier.
>
ok
Regards
Pavel
> regards, tom lane
>
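For illustration only (this sketch is not from the thread, and the staging-table and column names are hypothetical), the situation Tom describes can be reproduced with a materialized CTE joined to a large, well-analyzed table:

```sql
-- Hypothetical sketch of the planning situation discussed above.
-- A MATERIALIZED CTE gives the planner no statistics on its output,
-- so it prefers to build the hash table from the plain table, whose
-- statistics show a well-distributed join key.
EXPLAIN (ANALYZE, BUFFERS)
WITH caps AS MATERIALIZED (
    SELECT DISTINCT capability_id
    FROM some_staging_table          -- assumed: ~1100 rows out
)
SELECT p.*
FROM caps
JOIN rhnpackagecapability p          -- ~490964 rows, stats available
  ON p.id = caps.capability_id;
```

With enough work_mem, the expected plan shape is a Hash Join with rhnpackagecapability on the inner (hashed) side and the CTE Scan on the outer side, even though the CTE output is far smaller: hashing the known-well-distributed table is the safer bet when the CTE's distribution is unknown.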