From: Andrey Repko <repko(at)sart(dot)must-ipra(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Index not used on group by
Date: 2005-09-27 09:14:31
Message-ID: 144520928.20050927121431@sart.must-ipra.com
Lists: pgsql-performance
Hello all,

I have a table ma_data that contains over 300,000 rows. The table has a
primary key id and a field alias_id, and I created a btree index on that
field.
I also raised the statistics target:
ALTER TABLE "public"."ma_data"
ALTER COLUMN "alias_id" SET STATISTICS 998;
So, when I run something like

SELECT alias_id FROM ma_data GROUP BY alias_id;

I get (with enable_seqscan off):
Group (cost=0.00..1140280.63 rows=32 width=4) (actual time=0.159..2640.090 rows=32 loops=1)
-> Index Scan using reference_9_fk on ma_data (cost=0.00..1139526.57 rows=301624 width=4) (actual time=0.120..1471.128 rows=301624 loops=1)
Total runtime: 2640.407 ms
(3 rows)
As I understand it, there are problems here with the visibility of
records (the index alone cannot tell which rows are visible), yet some
other DBMSs (FireBird, for example) use indexes for this without
trouble. Is that the cause, or is there other information that would be
helpful to me and the community?
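For reference, one workaround others have suggested for this pattern is to emulate a "loose index scan" by hand: since alias_id has only 32 distinct values, each can be fetched with its own small index probe instead of walking all 301,624 entries. The sketch below assumes the ma_data table and alias_id index described above; it uses a recursive CTE, which requires a PostgreSQL version that supports WITH RECURSIVE (8.4 or later), so it is an illustration of the technique rather than something runnable on the poster's server.

```sql
-- Hedged sketch: emulate a loose index scan over ma_data.alias_id.
-- Each step fetches the next distinct alias_id with one index probe.
WITH RECURSIVE t AS (
    -- Anchor: the smallest alias_id in the table.
    (SELECT alias_id FROM ma_data ORDER BY alias_id LIMIT 1)
    UNION ALL
    -- Recursive step: the smallest alias_id greater than the last one
    -- found; the subquery yields NULL when no such value remains.
    SELECT (SELECT alias_id FROM ma_data
            WHERE alias_id > t.alias_id
            ORDER BY alias_id LIMIT 1)
    FROM t
    WHERE t.alias_id IS NOT NULL
)
SELECT alias_id FROM t WHERE alias_id IS NOT NULL;
```

With few distinct values this does on the order of 32 index lookups instead of scanning 300,000+ index entries, which is why it can beat both the seqscan and the full index scan plans for this kind of GROUP BY.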
--
With best regards,
Andrey Vladimirovich Repko mailto:repko(at)sart(dot)must-ipra(dot)com