| From: | Don Bowman <don(at)sandvine(dot)com> |
|---|---|
| To: | "'pgsql-performance(at)postgresql(dot)org'" <pgsql-performance(at)postgresql(dot)org> |
| Subject: | not using index for select min(...) |
| Date: | 2003-01-31 21:12:38 |
| Message-ID: | FE045D4D9F7AED4CBFF1B3B813C8533701023616@mail.sandvine.com |
| Lists: | pgsql-hackers pgsql-performance |
I have a table which is very large (~65K rows). It has an indexed
column which I wish to use for a join, but I'm finding that a
sequential scan is used instead of an index scan when selecting
a MIN on that column.
I've boiled this down to something like this:
```
=> create table x (value int primary key);
=> explain select min(value) from x;
Aggregate  (cost=22.50..22.50 rows=1 width=4)
  ->  Seq Scan on x  (cost=0.00..20.00 rows=1000 width=4)

=> \d x
      Table "public.x"
 Column |  Type   | Modifiers
--------+---------+-----------
 value  | integer | not null
Indexes: x_pkey primary key btree (value)
```
Why wouldn't I be doing an index scan on this table?
--don
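[Archive note: the behavior asked about here is expected for PostgreSQL of this era. Aggregates such as `min()` were opaque to the planner, which could not tell that a btree index already holds the answer at one end; automatic index use for `min()`/`max()` was not added until PostgreSQL 8.1. The commonly suggested workaround was to rewrite the aggregate as an ordered, limited scan. A sketch of that rewrite, using the table and column from the example above:]

```sql
-- Returns the same answer as SELECT min(value) FROM x, but in a form
-- the planner can match to the btree index on value: walk the index
-- in ascending order and stop after the first row.
SELECT value FROM x ORDER BY value ASC LIMIT 1;

-- The max() counterpart scans the index from the other end:
SELECT value FROM x ORDER BY value DESC LIMIT 1;
```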