From: Thom Brown <thom(at)linux(dot)com>
To: pgsql-performance <pgsql-performance(at)postgresql(dot)org>
Subject: Unused index influencing sequential scan plan
Date: 2012-10-18 16:11:51
Message-ID: CAA-aLv4aukjF=03XNV4o2UKWqweGd+XaHK0y1aTsgGGXmzVcDA@mail.gmail.com
Lists: pgsql-performance
Hi all,
I've created a test table containing 21 million random dates and
times, but I get wildly different plans when I introduce a
functional index and then ANALYSE again, even though the planner
doesn't use the index:
postgres=# CREATE TABLE test (id serial, sampledate timestamp);
CREATE TABLE
postgres=# INSERT INTO test (sampledate) SELECT '1970-01-01
00:00:00'::timestamp + (random()*1350561945 || ' seconds')::interval
FROM generate_series(1,21000000);
INSERT 0 21000000
postgres=# VACUUM;
VACUUM
postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)
FROM test GROUP BY extract(month FROM sampledate);
QUERY PLAN
----------------------------------------------------------------------
HashAggregate (cost=481014.00..481016.50 rows=200 width=8)
-> Seq Scan on test (cost=0.00..376014.00 rows=21000000 width=8)
(2 rows)
postgres=# ANALYSE;
ANALYZE
postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)
FROM test GROUP BY extract(month FROM sampledate);
QUERY PLAN
----------------------------------------------------------------------------
GroupAggregate (cost=4078473.42..4498473.90 rows=21000024 width=8)
-> Sort (cost=4078473.42..4130973.48 rows=21000024 width=8)
Sort Key: (date_part('month'::text, sampledate))
-> Seq Scan on test (cost=0.00..376014.30 rows=21000024 width=8)
(4 rows)
postgres=# CREATE INDEX idx_test_sampledate_month ON test
(extract(month FROM sampledate));
CREATE INDEX
postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)
FROM test GROUP BY extract(month FROM sampledate);
QUERY PLAN
----------------------------------------------------------------------------
GroupAggregate (cost=4078470.03..4498470.03 rows=21000000 width=8)
-> Sort (cost=4078470.03..4130970.03 rows=21000000 width=8)
Sort Key: (date_part('month'::text, sampledate))
-> Seq Scan on test (cost=0.00..376014.00 rows=21000000 width=8)
(4 rows)
postgres=# ANALYSE;
ANALYZE
postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)
FROM test GROUP BY extract(month FROM sampledate);
QUERY PLAN
----------------------------------------------------------------------
HashAggregate (cost=481012.85..481013.00 rows=12 width=8)
-> Seq Scan on test (cost=0.00..376013.17 rows=20999934 width=8)
(2 rows)
The total cost estimate is down to almost a 10th of what it was before. What's going on?
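Presumably that second ANALYSE also gathered statistics on the index
expression itself, which would explain where the 12-group estimate comes
from, though I haven't confirmed that. Something like this ought to show
whatever got collected (untested sketch; I'm assuming the expression's
stats show up under the index's name):
postgres=# SELECT tablename, attname, n_distinct
postgres-# FROM pg_stats
postgres-# WHERE tablename = 'idx_test_sampledate_month';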
And as a side note, how come it's impossible to get the planner to use
an index-only scan to satisfy the query, even after disabling sequential
and regular index scans?
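(For reference, this is roughly what I tried in order to disable those,
a sketch from memory rather than a verbatim session:
postgres=# SET enable_seqscan = off;
SET
postgres=# SET enable_indexscan = off;
SET
postgres=# EXPLAIN SELECT extract(month FROM sampledate), count(*)
FROM test GROUP BY extract(month FROM sampledate);
Even then the plan doesn't come out as an Index Only Scan on
idx_test_sampledate_month.)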
--
Thom