| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Svetlin Manavski <svetlin(dot)manavski(at)gmail(dot)com> |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: Join over two tables of 50K records takes 2 hours |
| Date: | 2011-10-14 04:37:48 |
| Message-ID: | 13425.1318567068@sss.pgh.pa.us |
| Lists: | pgsql-performance |
Svetlin Manavski <svetlin(dot)manavski(at)gmail(dot)com> writes:
> I am running 9.0.3 with the settings listed below. I have a prohibitively
> slow query in an application which otherwise has good overall performance:
It's slow because the planner is choosing a nestloop join on the
strength of its estimate that there's only a half dozen rows to be
joined. You need to figure out why those rowcount estimates are so bad.
I suspect that you've shot yourself in the foot by raising
autovacuum_analyze_threshold so high --- most likely, none of those
tables have ever gotten analyzed. And what's with the high
autovacuum_naptime setting? You might need to increase
default_statistics_target too, but first see if a manual ANALYZE makes
things better.
regards, tom lane
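The advice above can be tried directly in psql. A minimal sketch, where `appflow`, `appsession`, and `session_id` are placeholder names standing in for the poster's actual tables and join column (not named in this message):

```sql
-- Refresh planner statistics by hand for the two ~50K-row tables.
ANALYZE appflow;
ANALYZE appsession;

-- Re-check the planner's row estimates against reality: compare the
-- "rows=" estimate with the actual row count in each plan node. If
-- the join-input estimates are still far too low, the statistics
-- sample is probably insufficient.
EXPLAIN ANALYZE
SELECT *
FROM appflow f
JOIN appsession s ON s.session_id = f.session_id;

-- Raising the statistics target for just the skewed join column is a
-- narrower alternative to raising default_statistics_target globally.
ALTER TABLE appflow ALTER COLUMN session_id SET STATISTICS 500;
ANALYZE appflow;
```

If the estimates improve after a manual ANALYZE, that confirms autovacuum was never analyzing these tables, and the `autovacuum_analyze_threshold` and `autovacuum_naptime` settings should be brought back toward their defaults.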
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Scott Marlowe | 2011-10-14 04:38:16 | Re: Join over two tables of 50K records takes 2 hours |
| Previous Message | Tom Lane | 2011-10-14 04:25:53 | Re: Rapidly finding maximal rows |