From: Michael Lewis <mlewis(at)entrata(dot)com>
To: kimaidou <kimaidou(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Aggregate and many LEFT JOIN
Date: 2019-02-25 18:29:40
Message-ID: CAHOFxGpb-MdAMWmmHJs48tV+DAxUi57P2XKoLnsqEaVSDXJcFA@mail.gmail.com
Lists: pgsql-performance
On Mon, Feb 25, 2019 at 2:44 AM kimaidou <kimaidou(at)gmail(dot)com> wrote:
> I have better results with this version. Basically, I run a first query
> only made for aggregation, and then do a JOIN to get other needed data.
>
> * SQL : http://paste.debian.net/1070007/
> * EXPLAIN: https://explain.depesz.com/s/D0l
>
> Not really "fast", but I gained 30%
>
It still seems that the disk sort, and everything after it, is where the query
plan dies. It seems odd that the sort spilled to disk if work_mem was already
250MB. Can you allocate more as a test? Alternatively, if this data is needed
frequently, could you aggregate it once and keep a summarized copy that is
refreshed periodically?
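
To illustrate both suggestions, here is a rough sketch with placeholder
names (the real tables and columns are in the linked SQL paste, so everything
below is hypothetical, not taken from the thread):

```sql
-- Test whether more memory keeps the sort off disk; this only affects
-- the current session, so it is safe to experiment with.
SET work_mem = '512MB';
-- ...re-run the query and compare EXPLAIN (ANALYZE, BUFFERS) output...

-- If the aggregate is needed often, precompute it as a materialized view.
-- "child_table", "parent_id", and "updated_at" are placeholder names.
CREATE MATERIALIZED VIEW mv_agg_summary AS
SELECT parent_id,
       count(*)        AS n_children,
       max(updated_at) AS last_update
FROM   child_table
GROUP  BY parent_id;

-- A unique index is required for REFRESH ... CONCURRENTLY.
CREATE UNIQUE INDEX ON mv_agg_summary (parent_id);

-- Refresh on a schedule (cron, pg_cron, etc.) without blocking readers.
REFRESH MATERIALIZED VIEW CONCURRENTLY mv_agg_summary;
```

The query would then JOIN against mv_agg_summary instead of re-running the
aggregation, at the cost of the summary being only as fresh as its last
refresh.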