From: Richard Huxton <dev(at)archonet(dot)com>
To: pgsql-performance(at)nullmx(dot)com
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Rewriting DISTINCT and losing performance
Date: 2007-05-21 17:34:11
Message-ID: 4651D813.5050100@archonet.com
Lists: pgsql-performance
Chuck D. wrote:
> On Monday 21 May 2007 03:14, Josh Berkus wrote:
>> Chuck,
>>
>> Can we see the plan?
>>
>> --Josh
>>
>
> Sorry Josh, I guess I could have just used EXPLAIN instead of EXPLAIN
> ANALYZE.
>
> # explain
> SELECT country_id, country_name
> FROM geo.country
> WHERE country_id IN
> (select country_id FROM geo.city)
> ;
>                              QUERY PLAN
> ----------------------------------------------------------------------
>  Nested Loop IN Join  (cost=0.00..1252.60 rows=155 width=15)
>    Join Filter: (country.country_id = city.country_id)
>    ->  Seq Scan on country  (cost=0.00..6.44 rows=244 width=15)
>    ->  Seq Scan on city  (cost=0.00..43409.12 rows=2122712 width=2)
The only thing I can think of is that the CLUSTERing on city.country_id
makes the planner think it will be cheaper to seq-scan the whole table.
I take it you do have about 2 million rows in "city"?
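As a diagnostic sketch (assuming the geo.country / geo.city schema from the thread and an index on city.country_id; not something to leave on in production), you could refresh the statistics and then see what the planner would charge for an index-based plan:

```sql
-- Hypothetical diagnostic session; table and column names are taken
-- from the quoted query above.

-- Refresh planner statistics first, in case the row estimates are stale.
ANALYZE geo.city;
ANALYZE geo.country;

-- Discourage seq scans for this session only, to compare the cost the
-- planner assigns to the index-scan alternative.
SET enable_seqscan = off;

EXPLAIN
SELECT country_id, country_name
FROM geo.country
WHERE country_id IN (SELECT country_id FROM geo.city);

-- Restore the default behaviour.
RESET enable_seqscan;
```

If the index-scan plan comes out much cheaper when actually executed, that would point at the cost estimates rather than the query shape.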
--
Richard Huxton
Archonet Ltd
From | Date | Subject
---|---|---
Next Message | Chuck D. | 2007-05-21 18:17:44 | Re: Rewriting DISTINCT and losing performance
Previous Message | Jim C. Nasby | 2007-05-21 17:24:06 | Re: pg_stats how-to?