From: Greg Sabino Mullane <htamfids(at)gmail(dot)com>
To: yudhi s <learnerdatabase99(at)gmail(dot)com>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org, hjp-pgsql(at)hjp(dot)at
Subject: Re: Query performance issue
Date: 2024-10-22 19:01:55
Message-ID: CAKAnmmKMzXaUeVg-WLVReCOD-=+8GUq8=Uc7a0jfU6DgzPZ_Yg@mail.gmail.com
Lists: pgsql-general
To be frank, there is so much wrong with this query that it is hard to know
where to start. But a few top items:
* Make sure all of the tables involved have been analyzed. You might want
to bump default_statistics_target up and see if that helps.
* As mentioned already, increase work_mem, since you have sorts spilling to
disk (e.g. "external merge Disk: 36280kB" in the plan).
* Don't use the "FROM table1, table2, table3" syntax; use "FROM table1
JOIN table2 ON (...) JOIN table3 ON (...)" instead.
* Try not to use subselects. Things like WHERE x IN (SELECT ...) are
expensive and hard to optimize.
* You have useless GROUP BY clauses in there. Remove them to simplify the query.
* There is no LIMIT. Does the client really need all 135,214 rows?
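To illustrate the work_mem point: a per-session bump is a safe way to test the
effect before touching the server-wide setting. The 64MB value below is only a
guess sized against the ~36MB external merge in the plan; tune to taste.

```
-- Raise work_mem for this session only (illustrative value, not a recommendation):
SET work_mem = '64MB';

-- Re-run with EXPLAIN and look for the "external merge Disk" line to
-- disappear, replaced by an in-memory quicksort:
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;
```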
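A minimal sketch of the explicit-JOIN rewrite, using made-up table and column
names (table1/table2/table3 and the id columns are placeholders, not the
original query's schema):

```
-- Instead of:  FROM table1, table2, table3 WHERE table1.id = table2.t1_id ...
SELECT t1.id, t3.val
FROM table1 t1
JOIN table2 t2 ON t2.t1_id = t1.id
JOIN table3 t3 ON t3.t2_id = t2.id
WHERE t1.created_at >= '2024-10-01';
```

The join conditions move out of the WHERE clause into ON clauses, which makes
missing or accidental cross joins much easier to spot.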
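And for the subselect point, one common rewrite is turning IN (SELECT ...)
into an EXISTS semi-join; again the table and column names here are
hypothetical stand-ins:

```
-- Instead of:  WHERE x IN (SELECT x FROM other_table WHERE flag)
SELECT t.*
FROM some_table t
WHERE EXISTS (
    SELECT 1
    FROM other_table o
    WHERE o.x = t.x
      AND o.flag
);
```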
Cheers,
Greg