From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
Cc: James Coleman <jtc331(at)gmail(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Binary search in ScalarArrayOpExpr for OR'd constant arrays
Date: 2021-04-12 23:23:07
Message-ID: CAApHDvq_G0dn-tfX-sPCwOdxoXUUL19XSxh0OsgYPn4n97YzOg@mail.gmail.com
Lists: pgsql-hackers
On Sun, 11 Apr 2021 at 10:38, Tomas Vondra
<tomas(dot)vondra(at)enterprisedb(dot)com> wrote:
> I wonder what's the relationship between the length of the IN list and
> the minimum number of rows needed for the hash to start winning.
I made the attached spreadsheet which demonstrates the crossover point
using the costs that I coded into cost_qual_eval_walker().
For large arrays, it shows fairly significant benefits to hashing with
as few as 2 lookups; not hashing only wins when there is just 1 lookup.
However, the cost model does not account for allocating memory for the
hash table, which is far from free.
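To illustrate the shape of that crossover, here is a simplified sketch of
the two cost curves. The constants and formulas below are illustrative
assumptions for this example only, not the exact arithmetic used in
PostgreSQL's cost_qual_eval_walker(); only the default value of
cpu_operator_cost is taken from PostgreSQL.

```python
# Simplified sketch of the crossover between a linear scan of an
# IN-list and a hashed lookup. The formulas are assumptions made for
# illustration, not PostgreSQL's actual cost model.

CPU_OPERATOR_COST = 0.0025  # PostgreSQL's default cpu_operator_cost


def linear_cost(n_elems, n_lookups):
    # A linear scan compares against half the array per lookup on average.
    return n_lookups * CPU_OPERATOR_COST * n_elems / 2


def hashed_cost(n_elems, n_lookups):
    # One-off cost to hash every element into the table, then roughly
    # one operator evaluation per probe.
    build = n_elems * CPU_OPERATOR_COST
    return build + n_lookups * CPU_OPERATOR_COST


def crossover(n_elems):
    # Smallest number of lookups for which hashing is estimated cheaper.
    n = 1
    while hashed_cost(n_elems, n) >= linear_cost(n_elems, n):
        n += 1
    return n


if __name__ == "__main__":
    for n_elems in (4, 100, 1000):
        print(n_elems, crossover(n_elems))
```

With these (assumed) formulas the build cost is amortized almost
immediately for large arrays, so hashing is estimated cheaper after only
a handful of probes, which matches the general shape the spreadsheet
demonstrates; it likewise omits the memory-allocation overhead noted
above.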
You can adjust the number of items in the IN clause by changing the
value in cell B1. The values in B2 and B3 are what I saw the planner
set when I tested with both INT and TEXT types.
David
Attachment: cost_comparison_hashed_vs_non-hashed_saops.ods (application/vnd.oasis.opendocument.spreadsheet, 19.2 KB)