| From: | Joseph Shraibman <jks(at)selectacast(dot)net> | 
|---|---|
| To: | pgsql-general(at)postgresql(dot)org | 
| Subject: | performance tuning | 
| Date: | 2002-12-04 00:43:36 | 
| Message-ID: | asjj3l$1r5k$1@news.hub.org | 
| Lists: | pgsql-general | 
I have a query where postgres (7.2.1) seriously overestimates the cost of using an index.
When I do a set enable_seqscan = false; the query plan goes from:
->  Aggregate  (cost=49656.10..49656.10 rows=1 width=12)
      ->  Merge Join  (cost=49062.25..49655.18 rows=367 width=12)
            ->  Sort  (cost=11794.87..11794.87 rows=15220 width=6)
                  ->  Seq Scan on u  (cost=0.00..10737.55 rows=15220 width=6)
            ->  Sort  (cost=37267.38..37267.38 rows=136643 width=6)
                  ->  Seq Scan on d  (cost=0.00..24391.43 rows=136643 width=6)
- to -
->  Nested Loop  (cost=0.00..102204.91 rows=367 width=12)
      ->  Index Scan using u_pkey_key on u  (cost=0.00..43167.33 rows=15220 width=6)
      ->  Index Scan using d_pkey on d  (cost=0.00..3.86 rows=1 width=6)
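(For reference, the two plans above were produced roughly like this; the SELECT is only a stand-in, since the real query and join columns aren't shown here:)

    -- Default planner choice: the merge join over two seq scans
    EXPLAIN SELECT count(*) FROM u, d WHERE d.ukey = u.ukey;  -- stand-in query and columns

    -- Turn off seq scans for this session only and re-plan:
    SET enable_seqscan = false;
    EXPLAIN SELECT count(*) FROM u, d WHERE d.ukey = u.ukey;  -- now the nested loop over the indexes

    -- EXPLAIN ANALYZE (available in 7.2) also shows actual run times next to the estimates.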
The first query takes three times as long as the second. Since postgres thinks the nested loop
is so expensive, do I have to lower cpu_operator_cost to get it to use the nested loop?
And does 7.3 have any improvements in this area?
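(For reference, a minimal sketch of the per-session settings in question, using what I believe are the stock 7.2 defaults; random_page_cost is included only because it is the other cost setting that usually swings index-versus-seqscan decisions:)

    -- Per-session overrides of planner cost parameters (postgresql.conf works too):
    SET cpu_operator_cost = 0.0025;  -- default value; estimated CPU cost per operator evaluation
    SET random_page_cost = 4;        -- default value; lowering it makes index scans look cheaper
    SHOW random_page_cost;           -- check the current setting
    RESET random_page_cost;          -- revert to the default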