| From: | Michael Lewis <mlewis(at)entrata(dot)com> |
|---|---|
| To: | David Rowley <dgrowleyml(at)gmail(dot)com> |
| Cc: | "Liu, Xinyu" <liuxy(at)gatech(dot)edu>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
| Subject: | Re: Potential performance issues related to group by and covering index |
| Date: | 2021-03-02 21:04:24 |
| Message-ID: | CAHOFxGpbQqFqqHEovH2VVP9yooyaatNLs5YDw+r6+X+7i6c-xA@mail.gmail.com |
| Lists: | pgsql-performance |
>
> If we want to do anything much smarter than that, like trying every
> combination of the GROUP BY clause, then plan times are likely going
> to explode. The join order search is done based on the chosen query
> pathkeys, which in many queries are the pathkeys for the GROUP BY
> clause (see standard_qp_callback()). This means that throughout the
> join search, the planner will try to form paths that provide pre-sorted
> input, allowing the GROUP BY to be implemented efficiently. You might
> see Merge Joins rather than Hash Joins, for example.
>
Are there guidelines or principles you could share about writing the GROUP
BY clause such that it is more efficient?
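
For example, here is a rough sketch of the kind of guideline I have in mind
(the orders table and its index are made up for illustration). On versions
where the planner does not reorder the GROUP BY keys itself, listing them in
the same order as an existing index seems to let the planner feed pre-sorted
index output into a GroupAggregate, whereas a mismatched order may force an
explicit Sort or a HashAggregate:

-- hypothetical table and index, for illustration only
CREATE TABLE orders (
    customer_id integer NOT NULL,
    order_date  date    NOT NULL,
    amount      numeric NOT NULL
);
CREATE INDEX orders_cust_date_idx ON orders (customer_id, order_date);

-- GROUP BY columns in the same order as the index: the planner can use
-- the pre-sorted index output for a GroupAggregate (or pick a Merge Join
-- when other tables are joined in).
EXPLAIN
SELECT customer_id, order_date, sum(amount)
FROM orders
GROUP BY customer_id, order_date;

-- Reversed GROUP BY order no longer matches the index's sort order, so
-- the planner may fall back to an explicit Sort or a HashAggregate.
EXPLAIN
SELECT customer_id, order_date, sum(amount)
FROM orders
GROUP BY order_date, customer_id;

If that is roughly the right intuition, it would help to know whether
"order the GROUP BY keys to match the index or join keys you expect the
planner to use" is the main rule of thumb, or whether other considerations
matter more.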