| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Davin Potts <davin(at)appliomics(dot)com> |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: poor performing plan from analyze vs. fast default plan pre-analyze on new database |
| Date: | 2009-06-03 16:27:57 |
| Message-ID: | 6499.1244046477@sss.pgh.pa.us |
| Lists: | pgsql-performance |
Davin Potts <davin(at)appliomics(dot)com> writes:
> How to approach manipulating the execution plan back to something more
> efficient? What characteristics of the table could have induced
> analyze to suggest the much slower query plan?
What's evidently happening is that the planner is backing off from using
a hashed subplan because it thinks the hashtable will require more than
work_mem. Is 646400 a reasonably good estimate of the number of rows
that the sub-select will produce? If it's a large overestimate, then
perhaps increasing the stats target for content.hash will help. If
it's good, then what you want to do is increase work_mem to allow the
planner to use the better plan.
regards, tom lane
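
[Editor's note: as a sketch, the two remedies described above might look like the following in psql. The table and column names (`content`, `hash`) come from the thread; the specific values (a statistics target of 1000, `work_mem` of '64MB') are illustrative assumptions, not recommendations.]

```sql
-- Option 1: if 646400 is a large overestimate of the sub-select's row
-- count, gather finer statistics for content.hash, then re-analyze so
-- the planner sees the improved estimate:
ALTER TABLE content ALTER COLUMN hash SET STATISTICS 1000;
ANALYZE content;

-- Option 2: if the estimate is accurate, raise work_mem so the planner
-- believes the hashtable for the subplan will fit in memory:
SET work_mem = '64MB';  -- illustrative value; size to fit the hashtable
```

Note that `SET work_mem` only affects the current session; it can also be raised per-role, per-database, or globally in postgresql.conf, but a large global value multiplies across concurrent sorts and hashes.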
| From | Date | Subject | |
|---|---|---|---|
| Next Message | Scott Carey | 2009-06-03 17:08:23 | Re: poor performing plan from analyze vs. fast default plan pre-analyze on new database |
| Previous Message | Grzegorz Jaśkiewicz | 2009-06-03 16:07:37 | Re: poor performing plan from analyze vs. fast default plan pre-analyze on new database |