From: Michael Lewis <mlewis(at)entrata(dot)com>
To: Fahiz Mohamed <fahiz(at)netwidz(dot)com>
Cc: Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Specific query taking time to process
Date: 2019-12-11 20:09:19
Message-ID: CAHOFxGqxRD-=4+GD4egx7r421thgugxHBN-VNAVLi4D5D=sD7w@mail.gmail.com
Lists: pgsql-performance
This seems beyond me at this point, but I am curious whether you also
vacuumed the alf_node_properties and alf_node tables and checked when they
last got (auto)vacuumed/analyzed. With default autovacuum parameters,
tables with that many rows don't qualify for autovacuum very often. I don't
have much experience with tables in excess of 50 million rows because we
manually shard clients' data.
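For reference, a quick way to check that from psql might look like the below
(table names taken from your earlier messages in the thread):

```sql
-- Check when (auto)vacuum/analyze last touched these tables
SELECT relname, last_vacuum, last_autovacuum, last_analyze, last_autoanalyze,
       n_live_tup, n_dead_tup
FROM pg_stat_user_tables
WHERE relname IN ('alf_node', 'alf_node_properties');

-- If the stats look stale, vacuum/analyze manually
VACUUM (ANALYZE, VERBOSE) alf_node;
VACUUM (ANALYZE, VERBOSE) alf_node_properties;
```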
You mention that work_mem is set differently. Did you try setting work_mem
back to 4MB in the session on instance 1 just to test the query? I don't
know whether work_mem figures into the planning stage, but I would think it
may be considered. It would be odd for more available memory to result in a
slower plan, but I like to eliminate variables whenever possible.
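Something along these lines in a session would do it, without touching the
server config:

```sql
-- Override work_mem for this session only, then re-run the query
SET work_mem = '4MB';
EXPLAIN (ANALYZE, BUFFERS) SELECT ...;  -- your query here
RESET work_mem;
```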
It might be worthwhile to increase default_statistics_target to gather more
detailed statistics, but that can result in a dramatic increase in planning
time for even simple queries.
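You can try that at the session level before committing to a change in
postgresql.conf, or raise the target only for the columns involved in the
slow query (the column name below is just a placeholder, substitute your
own):

```sql
-- Session-level experiment; default_statistics_target defaults to 100
SET default_statistics_target = 500;
ANALYZE alf_node;

-- Or raise the target for one column only, then re-analyze
ALTER TABLE alf_node ALTER COLUMN some_column SET STATISTICS 500;
ANALYZE alf_node;
```

The per-column form limits the extra planning cost to queries that actually
touch that column.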
Hopefully one of the real experts chimes in.
Next Message: Jeff Janes | 2019-12-11 21:14:44 | Re: Specific query taking time to process
Previous Message: Fahiz Mohamed | 2019-12-11 19:53:57 | Re: Specific query taking time to process