Incorrect estimation of HashJoin rows resulted from inaccurate small table statistics

From: Quan Zongliang <quanzongliang(at)yeah(dot)net>
To: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Incorrect estimation of HashJoin rows resulted from inaccurate small table statistics
Date: 2023-06-16 09:25:16
Message-ID: bd581c3c-68b5-8ddc-c23c-d3c96f0cd983@yeah.net
Lists: pgsql-hackers


We have a small table with only 23 rows and 21 distinct values.

The resulting MCV list and histogram are as follows:
stanumbers1 | {0.08695652,0.08695652}
stavalues1 | {v1,v2}
stavalues2 |
{v3,v4,v5,v6,v7,v8,v9,v10,v11,v12,v13,v14,v15,v16,v17,v18,v19,v20,v21}

The number of rows is estimated incorrectly when this table is
hash-joined with another, large table (about 2 million rows):

Hash Join (cost=1.52..92414.61 rows=2035023 width=0) (actual
time=1.943..1528.983 rows=3902 loops=1)

The reason is that the MCV list of the small table excludes values that
occur only once. Putting those values into the MCV list of the
statistics gives the correct result.
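
As a self-contained illustration of that behaviour (a simplified
sketch, not the actual analyze.c code; the struct and counts below are
made up to mirror this table), only values seen more than once in the
sample end up as MCV candidates, so all of the singleton values stay
out of the MCV list:

#include <stdio.h>

/* Toy model of the sample described above: 23 rows, 21 distinct values,
 * of which only v1 and v2 occur more than once.  All names here are
 * illustrative; this is not the ANALYZE code itself. */
typedef struct
{
    const char *value;
    int         count;      /* occurrences in the sample */
} SampleValue;

int
main(void)
{
    SampleValue sample[] = {
        {"v1", 2}, {"v2", 2},
        {"v3", 1}, {"v4", 1}, {"v5", 1}   /* ... v6..v21, one row each */
    };
    int         nvalues = (int) (sizeof(sample) / sizeof(sample[0]));
    int         mcv_candidates = 0;

    /* Simplified MCV-candidate rule: a value has to be seen more than
     * once in the sample to be considered for the MCV list. */
    for (int i = 0; i < nvalues; i++)
    {
        if (sample[i].count > 1)
            mcv_candidates++;
    }

    /* Only v1 and v2 qualify; the singletons are left for the histogram. */
    printf("MCV candidates: %d of %d distinct values\n",
           mcv_candidates, nvalues);
    return 0;
}

Extended to all 21 values, this reports 2 MCV candidates, which matches
the two-entry stanumbers1/stavalues1 output above.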

Using the conservative condition samplerows <= attstattarget does not
completely solve this problem, but it is enough to handle this case.
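
To make that condition concrete, here is a minimal sketch of the idea
(the function name and structure are mine, and the attached analyze.diff
may differ in detail): when the sampled rows all fit within the
statistics target, every distinct value, including those seen only
once, can be kept in the MCV list.

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only; not the attached patch.  The idea: if the whole
 * sample fits within the statistics target, all distinct values can be
 * stored as MCVs, so no value is relegated to the histogram. */
static bool
keep_all_sample_values_as_mcv(int samplerows, int attstattarget)
{
    return samplerows <= attstattarget;
}

int
main(void)
{
    /* The table in this report: 23 rows sampled, default statistics
     * target of 100. */
    printf("keep every distinct value in the MCV list: %s\n",
           keep_all_sample_values_as_mcv(23, 100) ? "yes" : "no");
    return 0;
}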

After the modification we get statistics with no histogram:
stanumbers1 | {0.08695652,0.08695652,0.04347826,0.04347826, ... }
stavalues1 | {v1,v2, ... }

And we get the right estimate:
Hash Join (cost=1.52..72100.69 rows=3631 width=0) (actual
time=1.447..1268.385 rows=3902 loops=1)

Regards,

--
Quan Zongliang
Beijing Vastdata Co., LTD

Attachment Content-Type Size
analyze.diff text/plain 456 bytes
