From: Sebastian Dressler <sebastian(at)swarm64(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Planner misestimation for JOIN with VARCHAR
Date: 2020-06-09 18:34:41
Message-ID: FCAF8126-4002-47E5-9ED6-1F04CDE6F18F@swarm64.com
Lists: pgsql-general
Hello,
I have a set of tables containing user data, and users can choose to store columns as constrained VARCHAR, with a typical limit of 100. While users can also choose from other types, they quite often go the VARCHAR route. Furthermore, they can pick PKs almost freely. As a result, I quite often see tables with the following DDL:
CREATE TABLE example_1(
a VARCHAR(100)
, b VARCHAR(100)
, c VARCHAR(100)
, payload TEXT
);
ALTER TABLE example_1 ADD PRIMARY KEY (a, b, c);
Due to processing requirements, these tables sometimes need to be joined on the complete PK. For instance, assume example_1 and example_2 both have the structure above. Then, when I run
SELECT *
FROM example_1 t1
INNER JOIN example_2 t2 ON(
t1.a = t2.a
AND t1.b = t2.b
AND t1.c = t2.c
);
the planner will very likely estimate a single resulting row for this operation. For instance:
Gather  (cost=1510826.53..3100992.19 rows=1 width=138)
  Workers Planned: 13
  ->  Parallel Hash Join  (cost=1510726.53..3100892.04 rows=1 width=138)
        Hash Cond: (((t1.a)::text = (t2.a)::text) AND ((t1.b)::text = (t2.b)::text) AND ((t1.c)::text = (t2.c)::text))
        ->  Parallel Seq Scan on example_1 t1  (cost=0.00..1351848.61 rows=7061241 width=69)
        ->  Parallel Hash  (cost=1351848.61..1351848.61 rows=7061241 width=69)
              ->  Parallel Seq Scan on example_2 t2  (cost=0.00..1351848.61 rows=7061241 width=69)
This is not a problem when joining just two tables on their own. However, in a more complex query there will be more than one single-row estimate. Hence, I typically end up with a nested loop which eventually takes very long to process.
This runs on PG 12, and I have ensured that the tables are analyzed; my default_statistics_target is 2500. However, it seems that the more VARCHARs are involved in the JOIN, the worse the estimates become. Given the table definition above, I wonder whether I have overlooked anything in terms of settings or additional indexes that could help here.
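For what it is worth, I am aware the target can also be raised per column; a minimal sketch of what I mean (the value 5000 is arbitrary, and I have not verified that it changes anything here):

ALTER TABLE example_1 ALTER COLUMN a SET STATISTICS 5000;
ALTER TABLE example_1 ALTER COLUMN b SET STATISTICS 5000;
ALTER TABLE example_1 ALTER COLUMN c SET STATISTICS 5000;
ANALYZE example_1;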
Things tried so far without any noticeable change:
- Add an index covering the whole PK
- Add indexes on other columns, trying to help the JOIN
- Add extended statistics on two related columns (sketched below)
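The extended statistics attempt looked roughly like the following (the statistics name is made up; PG 12 also supports the mcv kind, though as far as I know extended statistics do not currently feed into join selectivity estimation):

CREATE STATISTICS example_1_ab_stats (ndistinct, dependencies, mcv)
    ON a, b
    FROM example_1;
ANALYZE example_1;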
Another idea I had was to make use of generated columns, hash the PK columns together into a BIGINT, and use solely that for the JOIN. However, this would not work when not all columns of the PK are used in the JOIN.
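Roughly, the sketch I had in mind looks like this (untested; hashtextextended should be immutable so it can be used in a generated column, the '|' separator is an arbitrary choice, and due to possible hash collisions the original columns would still have to be compared):

ALTER TABLE example_1
    ADD COLUMN pk_hash BIGINT GENERATED ALWAYS AS (
        hashtextextended(a || '|' || b || '|' || c, 0)
    ) STORED;

ALTER TABLE example_2
    ADD COLUMN pk_hash BIGINT GENERATED ALWAYS AS (
        hashtextextended(a || '|' || b || '|' || c, 0)
    ) STORED;

-- The JOIN would then compare the hash in addition to the PK columns:
SELECT *
FROM example_1 t1
INNER JOIN example_2 t2 ON(
    t1.pk_hash = t2.pk_hash
    AND t1.a = t2.a
    AND t1.b = t2.b
    AND t1.c = t2.c
);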
Thanks,
Sebastian
--
Sebastian Dressler, Solution Architect
+49 30 994 0496 72 | sebastian(at)swarm64(dot)com
Swarm64 AS
Parkveien 41 B | 0258 Oslo | Norway
Registered at Brønnøysundregistrene in Norway under Org.-Number 911 662 787
CEO/Geschäftsführer (Daglig Leder): Thomas Richter; Chairman/Vorsitzender (Styrets Leder): Dr. Sverre Munck
Swarm64 AS Zweigstelle Hive
Ullsteinstr. 120 | 12109 Berlin | Germany
Registered at Amtsgericht Charlottenburg - HRB 154382 B