From: Michael Lewis <mlewis(at)entrata(dot)com>
To: Dane Foster <studdugie(at)gmail(dot)com>
Cc: psql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Slow query and wrong row estimates for CTE
Date: 2021-02-16 15:12:55
Message-ID: CAHOFxGqt+qShpk-7rxNXooqfahKYXuhD2ECKgb4Zc13y4tzy9A@mail.gmail.com
Lists: pgsql-performance
> Sort Method: external merge  Disk: 30760kB
> Worker 0:  Sort Method: external merge  Disk: 30760kB
> Worker 1:  Sort Method: external merge  Disk: 30760kB
If you can increase work_mem, even setting it temporarily higher for the
session or transaction, that may change the plan dramatically, since those
sorts are currently spilling to disk. Justin's advice, particularly about
the row estimates, would be wise to pursue.

I'd wonder how selective the condition score_name =
'student_performance_index' is, i.e. how many of the 9.3 million tuples in
that table it actually filters out, and whether an index with that as the
leading column, or just an index on that column alone, would be helpful.
You'd need to look at pg_stats for the table to see how many distinct
values there are and whether student_performance_index is relatively high
or low (or not present) in the MCVs list.
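As a quick sketch, raising it only for the session or transaction could look like this (the 256MB figure is just an illustration, not a recommendation for your system):

```sql
-- Session scope: stays in effect until the connection ends or you RESET it.
SET work_mem = '256MB';

-- Transaction scope: SET LOCAL reverts automatically at COMMIT/ROLLBACK.
BEGIN;
SET LOCAL work_mem = '256MB';
-- re-run the query here with EXPLAIN (ANALYZE, BUFFERS) to compare plans
COMMIT;
```

If the external merge sorts switch to quicksort in memory, you'll often see a different (and faster) plan shape.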
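For example, the check might look like the following (I'm using assessment_scores as a stand-in for your real table name):

```sql
-- How many distinct score_name values are there, and is
-- 'student_performance_index' among the most common values?
SELECT null_frac, n_distinct,
       most_common_vals, most_common_freqs
FROM   pg_stats
WHERE  tablename = 'assessment_scores'   -- hypothetical table name
AND    attname   = 'score_name';

-- If the value turns out to be selective, an index on the column may help:
CREATE INDEX ON assessment_scores (score_name);

-- Or, if that filter is always present, a partial index on another key
-- column (hypothetical) keeps the index smaller:
CREATE INDEX ON assessment_scores (student_id)
  WHERE score_name = 'student_performance_index';
```

If 'student_performance_index' shows up in most_common_vals with a large frequency, the planner will likely prefer a seqscan regardless, and the index won't buy you much.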
I am not sure if your query does what you want it to do as I admit I didn't
follow your explanation of the desired behavior. My hunch is that you want
to make use of a window function and get rid of one of the CTEs.
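Since I'm guessing at your schema, this is only an illustration of the pattern I mean (student_id and created_at are hypothetical column names): a "compute an aggregate in one CTE, join back in another" shape can often collapse into a single pass with a window function, e.g. to keep the latest row per student:

```sql
SELECT *
FROM  (SELECT s.*,
              ROW_NUMBER() OVER (PARTITION BY s.student_id
                                 ORDER BY s.created_at DESC) AS rn
       FROM   assessment_scores s
       WHERE  s.score_name = 'student_performance_index') ranked
WHERE rn = 1;   -- one row per partition, no self-join needed
```

That avoids scanning the table twice and gives the planner a simpler tree to estimate.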