From: Karl Czajkowski <karlcz(at)isi(dot)edu>
To: Chris Wilson <chris+postgresql(at)qwirx(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, george(dot)saklatvala(at)cantabcapital(dot)com
Subject: Re: Fwd: Slow query from ~7M rows, joined to two tables of ~100 rows each
Date: 2017-06-26 17:01:20
Message-ID: 20170626170120.GB27236@moraine.isi.edu
Lists: pgsql-performance
On Jun 26, Chris Wilson modulated:
> ...
> In your case, the equivalent hack would be to compile the small
> dimension tables into big CASE statements I suppose...
>
>
> Nice idea! I tried this but unfortunately it made the query 16 seconds
> slower (up to 22 seconds) instead of faster.
Other possible rewrites to try instead of joins:

- replace the CASE statement with a scalar subquery
- replace the CASE statement with a stored function wrapping that scalar
  subquery, and declare the function STABLE or even IMMUTABLE
These are shots in the dark, but seem easy enough to experiment with and might
behave differently if the query planner realizes it can cache results for
repeated use of the same ~100 input values.
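For concreteness, a minimal sketch of the two rewrites (the `fact` and `dim` table and column names here are hypothetical stand-ins for the real schema):

```sql
-- Rewrite 1: scalar subquery in place of the CASE statement
SELECT f.id,
       (SELECT d.val FROM dim d WHERE d.id = f.dim_id) AS dim_val
FROM fact f;

-- Rewrite 2: wrap the scalar subquery in a function with a declared
-- volatility. STABLE is the honest choice for a function that reads a
-- table; IMMUTABLE overstates it and is only safe if dim truly never
-- changes, but may let the planner cache results more aggressively.
CREATE FUNCTION dim_lookup(p_id integer) RETURNS text
LANGUAGE sql STABLE AS $$
  SELECT val FROM dim WHERE id = p_id
$$;

SELECT f.id, dim_lookup(f.dim_id) AS dim_val
FROM fact f;
```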
Karl