Re: much slower query in production

From: Michael Lewis <mlewis(at)entrata(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: Guillaume Cottenceau <gc(at)mnc(dot)ch>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: much slower query in production
Date: 2020-02-26 16:52:37
Message-ID: CAHOFxGrEX4FGV=uEtzWdS-MNgh4t5U139TJKPwYFFLfWcBrEfA@mail.gmail.com
Lists: pgsql-performance

By the way, I expect the time is cut in half while heap fetches stay
similar because the index is now in the OS cache on the second run and didn't
need to be fetched from disk. You definitely need to check on vacuuming, as
Justin says. If you have a fairly active system, you would need to run this
query many times in order to push other data out of shared_buffers and get
this query to perform more like it does on dev.
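If it helps, a quick way to check the vacuum side is the standard
pg_stat_user_tables view (a sketch only; I'm assuming the table names from
the query further down):

-- When were these tables last vacuumed, and how many dead tuples remain?
-- Heap fetches on an index-only scan drop once VACUUM has marked most
-- heap pages all-visible.
SELECT relname, last_vacuum, last_autovacuum, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname IN ('multicards', 'tickets');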

Do you have the option to rewrite the query, or is it generated by an
ORM? As I read this query, you are forcing the nested-loop behavior. If you
aggregate before you join, the system should be able to do a single scan of
the index, aggregate, and then join those relatively few rows to the
multicards table records:

SELECT transaction_uid, COALESCE(sub.count, 0) AS count
FROM multicards
LEFT JOIN (
    SELECT multicard_uid, COUNT(*) AS count
    FROM tickets
    GROUP BY multicard_uid
) AS sub ON sub.multicard_uid = multicards.uid;
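
To confirm the plan change in production, you could prepend EXPLAIN to that
query (a sketch; same tables as above):

-- Expect the tickets aggregate to be computed once and joined to multicards,
-- rather than one index scan per multicards row:
EXPLAIN (ANALYZE, BUFFERS)
SELECT transaction_uid, COALESCE(sub.count, 0) AS count
FROM multicards
LEFT JOIN (
    SELECT multicard_uid, COUNT(*) AS count
    FROM tickets
    GROUP BY multicard_uid
) AS sub ON sub.multicard_uid = multicards.uid;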
