Re: Performance of a Query

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: "Kumar, Virendra" <Virendra(dot)Kumar(at)guycarp(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance of a Query
Date: 2018-01-09 22:08:17
Message-ID: CAOR=d=0cPhUjHhX4wRtBGX6Fu3NRKfMDi4YdTs1nh6DVKeE+Xw@mail.gmail.com
Lists: pgsql-performance

On Tue, Jan 9, 2018 at 2:18 PM, Kumar, Virendra
<Virendra(dot)Kumar(at)guycarp(dot)com> wrote:
> Hello Gurus,
>
> I am struggling to tune a query which is doing a join on top of an aggregate
> for around 3 million rows. The plan and SQL are attached to the email.
>
> Below are system details:
>
> PGSQL version – 10.1
>
> OS – RHEL 3.10.0-693.5.2.el7.x86_64
>
> Binary – Downloaded from postgres.org, compiled and installed.
>
> Hardware – Virtual machine with 8 vCPUs and 32GB of RAM, on an XFS filesystem.

I uploaded your query plan here: https://explain.depesz.com/s/14r6

The most expensive part is the merge join at the end.

Lines like this one:

"Buffers: shared hit=676 read=306596, temp read=135840 written=135972"

tell me that your sorts etc. are spilling to disk, so the first thing
to try is upping work_mem a bit. Don't go crazy, as it can run your
machine out of memory if you do, but doubling or tripling it and
seeing the effect on query performance is a good place to start.
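
For instance (a sketch only; substitute the actual SQL from your
attachment for the placeholder query), you can raise work_mem for
just your session and re-run the plan to see whether the temp
reads/writes go away:

    -- Session-local change; other connections are unaffected.
    -- '256MB' is purely illustrative, not a recommendation.
    SET work_mem = '256MB';

    -- Re-run with BUFFERS and check whether the "temp read/written"
    -- numbers shrink or disappear.
    EXPLAIN (ANALYZE, BUFFERS) SELECT ... ;  -- your query here

    -- Put the session back on the server default when done.
    RESET work_mem;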

The good news is that most of your row estimates are about right, so
the query planner is doing what it can to make the query fast, but I'm
guessing that if you get work_mem high enough it will switch from a
merge join to a hash join or something more efficient for large
numbers of rows.
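
If a bigger work_mem does flip the plan, here's a sketch of how you
might persist the setting once you've settled on a value (scope it to
a role if only one workload needs it; "app_user" is a made-up name):

    -- Server-wide: writes to postgresql.auto.conf. work_mem only
    -- needs a reload, not a restart, and applies to new sessions.
    ALTER SYSTEM SET work_mem = '256MB';
    SELECT pg_reload_conf();

    -- Or per-role, so only that workload pays the memory cost:
    ALTER ROLE app_user SET work_mem = '256MB';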
