From: Abelard Hoffman <abelardhoffman(at)gmail(dot)com>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Will pg_repack improve this query performance?
Date: 2014-10-15 02:33:46
Message-ID: CACEJHMiAAk-eNZ_FM4tsDSDvpw+2Zpioa2H0m4xg7V=Xiwkykw@mail.gmail.com
Lists: pgsql-general
I believe this query is well optimized, but it's slow if all the blocks
aren't already in memory.
Here's an example of the EXPLAIN output. You can see it takes over 7 seconds
to run when it needs to hit disk, and almost all of the time is spent
checking whether the user has "messages":
http://explain.depesz.com/s/BLT
On a second run, it's extremely fast (< 50ms). So I'm thinking it's a lack
of clustering on the "Index Cond: (to_id = users.user_id)" that's the
culprit.
I'm afraid of using CLUSTER due to the exclusive lock, but I found
pg_repack while researching:
http://reorg.github.io/pg_repack/
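
For context, the in-place approach I'm trying to avoid would be something like this (the table and index names here are made up; the real ones differ):

```sql
-- CLUSTER rewrites the table in index order, but it holds an
-- ACCESS EXCLUSIVE lock for the whole rewrite, blocking both
-- reads and writes on the table for the duration.
CLUSTER messages USING messages_to_id_idx;

-- New rows are not kept in order afterward, so this would need
-- periodic re-runs; a bare CLUSTER re-clusters on the previously
-- chosen index.
CLUSTER messages;
```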
Does it seem likely that doing an --order-by on the to_id column would have
a significant impact in this case? And is pg_repack considered reasonably
stable and safe at this point?
I'm going to test this in a dev environment first, but I wanted feedback on
whether this seems like a good direction.
Thanks.
--
Best,
AH