From: Guillaume Cottenceau <gc(at)mnc(dot)ch>
To: Michael Lewis <mlewis(at)entrata(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: much slower query in production
Date: 2020-02-26 18:04:02
Message-ID: 87sgixmdil.fsf@mnc.ch
Lists: pgsql-performance
Michael Lewis <mlewis 'at' entrata.com> writes:
> By the way, I expect the time is cut in half while heap fetches stays similar because the index is now in OS cache on the
> second run and didn't need to be fetched from disk. Definitely need to check on vacuuming as Justin says. If you have a fairly
> active system, you would need to run this query many times in order to push other stuff out of shared_buffers and get this
> query to perform more like it does on dev.
>
> Do you have the option to re-write the query or is this generated by an ORM? You are forcing the looping as I read this query.
> If you aggregate before you join, then the system should be able to do a single scan of the index, aggregate, then join those
> relatively few rows to the multicards table records.
>
> SELECT transaction_uid, COALESCE( sub.count, 0 ) AS count
> FROM multicards
> LEFT JOIN (SELECT multicard_uid, COUNT(*) AS count
>            FROM tickets GROUP BY multicard_uid) AS sub
>   ON sub.multicard_uid = multicards.uid;
Thanks for this hint! I keep running into the fact that I never
write good queries using explicit joins :/
Execution time (before vacuuming the table as advised by Justin)
is down 38x, to 44509ms, using this query :)
The real query was an UPDATE of the multicards table to set the
count value. I rewrote it using your approach, but I think I'm
missing what COALESCE did in your query: this updates only the
rows where count >= 1, obviously:
UPDATE multicards
SET defacements = count
FROM ( SELECT multicard_uid, COUNT(*) AS count FROM tickets GROUP BY multicard_uid ) AS sub
WHERE uid = multicard_uid;
Any hint for doing that in one pass? I could do a first pass
setting defacements = 0, but that would produce more garbage :/
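For reference, a minimal sketch of one possible one-pass form: keep the LEFT JOIN against the full multicards list inside the FROM subquery, so COALESCE can supply 0 for cards with no tickets. The schema below is a toy stand-in (assumed column names), run through SQLite via Python's sqlite3 for illustration; PostgreSQL accepts the same UPDATE ... FROM shape.

```python
import sqlite3

# Toy schema standing in for the real tables (assumed column names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE multicards (uid INTEGER PRIMARY KEY, defacements INTEGER);
CREATE TABLE tickets (uid INTEGER PRIMARY KEY, multicard_uid INTEGER);
INSERT INTO multicards (uid, defacements) VALUES (1, NULL), (2, NULL), (3, NULL);
INSERT INTO tickets (uid, multicard_uid) VALUES (10, 1), (11, 1), (12, 3);
""")

# Aggregate tickets once, LEFT JOIN the result back to every multicards
# row inside the FROM subquery, and let COALESCE turn "no tickets" into 0.
# (UPDATE ... FROM needs SQLite >= 3.33; PostgreSQL has supported it for ages.)
conn.execute("""
UPDATE multicards
   SET defacements = COALESCE(sub.count, 0)
  FROM (SELECT m.uid, s.count
          FROM multicards m
          LEFT JOIN (SELECT multicard_uid, COUNT(*) AS count
                       FROM tickets GROUP BY multicard_uid) AS s
            ON s.multicard_uid = m.uid) AS sub
 WHERE multicards.uid = sub.uid;
""")
print(conn.execute("SELECT uid, defacements FROM multicards ORDER BY uid").fetchall())
# → [(1, 2), (2, 0), (3, 1)]  -- card 2 has no tickets, so it gets 0
```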
Thanks!
--
Guillaume Cottenceau