From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Simple IN vs IN values performance
Date: 2021-02-22 23:45:06
Message-ID: 7334ed05-d880-d6c5-46ed-24f6b6491a2a@gmail.com
Lists: pgsql-general
On 2/22/21 9:00 AM, Oleksandr Voytsekhovskyy wrote:
> Greetings,
>
> We have queries with IN filters containing long lists of INT values.
>
> Sometimes they run extremely slowly, and I have found a suggestion to use the
> syntax
> Field IN (VALUES (1465), (1478), ...
> instead of
> Field IN (1465, 1478, ...
> In some cases it helps, but in others it makes the query run 1000+ times slower.
Insert the values into a TEMPORARY TABLE, then join that to your main table?
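A minimal sketch of that approach (the table name `my_table`, its column `field`, and the temp table name are hypothetical placeholders):

```sql
-- Load the long value list into a temporary table.
CREATE TEMPORARY TABLE tmp_ids (id int PRIMARY KEY);
INSERT INTO tmp_ids VALUES (1465), (1478);  -- ... rest of the list

-- ANALYZE gives the planner real row-count statistics for the join.
ANALYZE tmp_ids;

SELECT t.*
FROM   my_table t
JOIN   tmp_ids ON tmp_ids.id = t.field;
```

With up-to-date statistics on the temp table, the planner can choose a hash or merge join instead of re-evaluating a long IN list per row.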
>
> So the issue is:
> - some queries are much faster with a simple IN
> - some queries are much faster with IN + VALUES
>
> Here is the EXPLAIN output for a case where it's extremely slow:
>
> ->  HashAggregate  (cost=5.78..9.62 rows=385 width=4)
>       Group Key: "*VALUES*".column1
>       ->  Values Scan on "*VALUES*"  (cost=0.00..4.81 rows=385 width=4)
> What is the right way to pass a long list of INT values to an IN filter?
>
> I am using PostgreSQL 13.1 on Ubuntu
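Another option worth testing is passing the whole list as a single array parameter. `field = ANY (array)` is equivalent to the IN filter, but the list arrives as one value, which can plan differently from a long literal IN list (a sketch; `my_table` and `field` are hypothetical names):

```sql
-- Same filter expressed as an array comparison.
SELECT *
FROM   my_table
WHERE  field = ANY (ARRAY[1465, 1478]);  -- ... rest of the list
```

This also plays well with prepared statements, since the array can be bound as one parameter instead of interpolating hundreds of literals.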
--
Angular momentum makes the world go 'round.