Re: Aggregate and many LEFT JOIN

From: kimaidou <kimaidou(at)gmail(dot)com>
To: Michael Lewis <mlewis(at)entrata(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Justin Pryzby <pryzby(at)telsasoft(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Aggregate and many LEFT JOIN
Date: 2019-02-26 12:54:00
Message-ID: CAMKXKO5qvnLYCYRJux4yaN=Lj-phCgnE+KBere7VS7d31z78JA@mail.gmail.com
Lists: pgsql-performance

I managed to avoid the disk sort after running VACUUM ANALYZE
and setting a session-level work_mem = '250MB':

* SQL http://paste.debian.net/1070207/
* EXPLAIN https://explain.depesz.com/s/nJ2y
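
In short, the idea is the following (simplified sketch only; the real SQL is in
the paste above, and the table/column names below are placeholder names, not
the actual schema): set work_mem for the session, refresh statistics, aggregate
first, then join the other data back.

    SET work_mem = '250MB';   -- session-level, to keep the sort in memory
    VACUUM ANALYZE;           -- refresh planner statistics

    WITH agg AS (
        SELECT parent_id,
               count(*) AS nb,
               string_agg(label, ', ') AS labels
        FROM child_table
        GROUP BY parent_id
    )
    SELECT p.*, a.nb, a.labels
    FROM agg a
    JOIN parent_table p ON p.id = a.parent_id;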

It still takes 16 s, though.
It seems this kind of query will need better hardware to scale...

Thanks for your help

On Mon, Feb 25, 2019 at 7:30 PM, Michael Lewis <mlewis(at)entrata(dot)com> wrote:

>
>
> On Mon, Feb 25, 2019 at 2:44 AM kimaidou <kimaidou(at)gmail(dot)com> wrote:
>
>> I have better results with this version. Basically, I run a first query
>> that only does the aggregation, and then JOIN back to get the other needed data.
>>
>> * SQL : http://paste.debian.net/1070007/
>> * EXPLAIN: https://explain.depesz.com/s/D0l
>>
>> Not really "fast", but I gained 30%
>>
>
>
> It still seems that the disk sort and everything after it is where the query
> plan dies. It seems odd that it went to disk if work_mem was already 250MB.
> Can you allocate more as a test? As an alternative, if this data is needed
> frequently, can you aggregate it and keep a summarized copy that is updated
> periodically?
>
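
For reference, the "summarized copy" idea suggested above could take the shape
of a materialized view refreshed periodically. A rough sketch, again with
placeholder names rather than the actual schema:

    CREATE MATERIALIZED VIEW child_stats AS
    SELECT parent_id,
           count(*)        AS nb_children,
           max(updated_at) AS last_update
    FROM child_table
    GROUP BY parent_id;

    -- A unique index lets REFRESH ... CONCURRENTLY run without blocking readers.
    CREATE UNIQUE INDEX ON child_stats (parent_id);

    -- Refresh from cron (or similar) as often as the data needs to be fresh.
    REFRESH MATERIALIZED VIEW CONCURRENTLY child_stats;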
