From: Игорь Выскорко <vyskorko(dot)igor(at)yandex(dot)ru>
To: pgsql-general(at)postgresql(dot)org
Subject: Jsonb first level keys statistic
Date: 2020-01-28 13:50:07
Message-ID: 7291301580219407@sas1-2bf44b70450e.qloud-c.yandex.net
Lists: pgsql-general
Hi!
I ran into a problem caused by the wrong row count the planner expects when using the ?| operator against a jsonb field.
To illustrate the issue:
create table tbl_test (a jsonb);
insert into tbl_test(a)
select jsonb_build_object('val1', 1)
from generate_series(1, 100000);
analyse tbl_test;
explain analyze
select count(*)
from tbl_test
where a ?| '{val1}'::text[];
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1986.25..1986.26 rows=1 width=8) (actual time=55.264..55.264 rows=1 loops=1)
-> Seq Scan on tbl_test (cost=0.00..1986.00 rows=100 width=0) (actual time=0.026..43.886 rows=100000 loops=1)
Filter: (a ?| '{val1}'::text[])
Planning time: 0.065 ms
Execution time: 55.310 ms
rows=100 when the real value is 100000. In this simple example it's not a problem, but in real, rather complicated queries it leads to a wrong execution plan.
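As far as I can tell, the estimate looks like a flat default that does not depend on the key at all: I would expect the same rows=100 estimate even for a key that is not present in any row (the key name below is made up, and I haven't included that plan here):
explain analyze
select count(*)
from tbl_test
where a ?| '{no_such_key}'::text[];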
How can I help the planner get accurate statistics about key distribution in a jsonb field?
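The only workaround I can think of so far is to materialize the key test into a plain boolean column, so that ANALYZE collects ordinary statistics on it (just a sketch, the column name has_val1 is made up):
alter table tbl_test add column has_val1 boolean;
update tbl_test set has_val1 = (a ? 'val1');
analyse tbl_test;
explain analyze
select count(*)
from tbl_test
where has_val1;
But that duplicates data and needs a trigger to keep it in sync with the jsonb column, so I'd prefer a way to get proper statistics on the jsonb keys themselves.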
The version I'm using is PostgreSQL 9.6.16 on x86_64-pc-linux-gnu.