From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Bloom filters for hash joins (was: A design for amcheck heapam verification)
Date: 2017-09-18 17:29:32
Message-ID: 2283.1505755772@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Tue, Aug 29, 2017 at 10:22 PM, Thomas Munro
> <thomas(dot)munro(at)enterprisedb(dot)com> wrote:
>> (2) We could push a Bloom filter down to scans
>> (many other databases do this, and at least one person has tried this
>> with PostgreSQL and found it to pay off[1]).
> I think the hard part is going to be figuring out a query planner
> framework for this, because pushing the Bloom filter down to the
> scan changes the cost and the row count of the scan.
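For concreteness, a minimal sketch of the kind of filter under discussion,
keyed on the 32-bit join-key hash the executor computes anyway. All names
here are hypothetical and hand-rolled for illustration, not anything in the
PostgreSQL tree. Both (1) and (2) would fill it from the inner (build) side
and probe it once per outer tuple:

/*
 * Minimal Bloom filter sketch (hypothetical names, not executor code).
 * nbits is kept a power of two (and a multiple of 64) so positions
 * reduce with a mask; the k probe positions are derived from a single
 * 32-bit hash by double hashing: pos_i = (h1 + i*h2) mod nbits.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct bloom_filter
{
	uint64_t   *bits;		/* bit array, nbits wide */
	uint64_t	nbits;		/* number of bits, a power of two */
	int			nhashes;	/* probes per key */
} bloom_filter;

static bloom_filter *
bloom_create(uint64_t nbits, int nhashes)
{
	bloom_filter *f = malloc(sizeof(bloom_filter));

	f->nbits = nbits;
	f->nhashes = nhashes;
	f->bits = calloc(nbits / 64, sizeof(uint64_t));
	return f;
}

/* Derive the i'th bit position from one 32-bit join-key hash. */
static inline uint64_t
bloom_pos(const bloom_filter *f, uint32_t hash, int i)
{
	uint64_t	h1 = hash;
	uint64_t	h2 = ((uint64_t) hash * UINT64_C(0x9e3779b97f4a7c15)) >> 32;

	h2 |= 1;					/* keep the stride odd and nonzero */
	return (h1 + (uint64_t) i * h2) & (f->nbits - 1);
}

/* Build side: record each inner tuple's join-key hash. */
static void
bloom_add(bloom_filter *f, uint32_t hash)
{
	for (int i = 0; i < f->nhashes; i++)
	{
		uint64_t	pos = bloom_pos(f, hash, i);

		f->bits[pos / 64] |= UINT64_C(1) << (pos % 64);
	}
}

/* Probe side: false means definitely no inner match; true means maybe. */
static bool
bloom_maybe_match(const bloom_filter *f, uint32_t hash)
{
	for (int i = 0; i < f->nhashes; i++)
	{
		uint64_t	pos = bloom_pos(f, hash, i);

		if ((f->bits[pos / 64] & (UINT64_C(1) << (pos % 64))) == 0)
			return false;
	}
	return true;
}

Under (2), the scan node would run bloom_maybe_match() on each tuple's
join-key hash and discard definite non-matches before they ever reach
the join, which is exactly what makes its selectivity hard to cost.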
Uh, why does the planner need to be involved at all? This seems like
strictly an execution-time optimization. Even if you wanted to try
to account for it in costing, I think the reliability of the estimate
would be nil, never mind any questions about whether the planner's
structure makes it easy to apply such an adjustment.
Personally though I would not bother with (2); I think (1) would
capture most of the win for a very small fraction of the complication.
Just for starters, I do not think (2) works for batched hashes.
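To put (1) in code terms: the filter would be private to the hash join,
filled while the hash table is built and consulted at the top of the probe
loop, before any bucket lookup and, in the batched case, before an outer
tuple is written out to a batch file. A sketch of that probe-side check,
reusing the hypothetical helpers above:

/*
 * Sketch of (1): a filter private to the hash join node.  Purely
 * hypothetical; "hashvalue" stands for the 32-bit join-key hash the
 * join computes for each outer tuple anyway, so the test adds only a
 * few cache probes.  bloom_filter and bloom_maybe_match() are the
 * hand-rolled helpers from the sketch above.
 */
static bool
probe_outer_tuple(const bloom_filter *filter, uint32_t hashvalue)
{
	if (!bloom_maybe_match(filter, hashvalue))
		return false;			/* definitely unmatched: no bucket
								 * lookup, no batch-file write */
	/* ... fall through to the normal hash-table lookup ... */
	return true;
}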
regards, tom lane