From: | "Jonas Henriksen" <jonas(dot)f(dot)henriksen(at)gmail(dot)com> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | Slow query and indexes... |
Date: | 2007-05-07 13:53:09 |
Message-ID: | 51518a4f0705070653l7d6723eel924bdd8e06419e28@mail.gmail.com |
Lists: pgsql-general
Hi,
I'm trying to figure out how to make Postgres use my indexes on a table.
This query:
>> explain analyze SELECT max(date_time) FROM data_values;
runs fast and returns:
                                  QUERY PLAN
------------------------------------------------------------------------------------------------
 Result  (cost=0.08..0.09 rows=1 width=0) (actual time=0.108..0.111 rows=1 loops=1)
   InitPlan
     ->  Limit  (cost=0.00..0.08 rows=1 width=8) (actual time=0.090..0.092 rows=1 loops=1)
           ->  Index Scan Backward using data_values_data_date_time_index on data_values  (cost=0.00..58113.06 rows=765121 width=8) (actual time=0.078..0.078 rows=1 loops=1)
                 Filter: (date_time IS NOT NULL)
 Total runtime: 0.204 ms
(6 rows)
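For what it's worth, if I understand the 8.x planner correctly, the plain MAX above gets rewritten into roughly this index-backed form, which would explain the backward scan on the date_time index:

SELECT date_time FROM data_values
WHERE date_time IS NOT NULL
ORDER BY date_time DESC LIMIT 1;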
while if I add a GROUP BY data_logger_id, the query uses a seq scan and takes a lot of time:
>> explain analyze SELECT max(date_time) FROM data_values GROUP BY data_logger_id;
                                  QUERY PLAN
------------------------------------------------------------------------------------------------
 HashAggregate  (cost=20171.82..20171.85 rows=3 width=12) (actual time=3510.500..3510.506 rows=3 loops=1)
   ->  Seq Scan on data_values  (cost=0.00..16346.21 rows=765121 width=12) (actual time=0.039..1598.518 rows=765121 loops=1)
 Total runtime: 3510.634 ms
(3 rows)
The table contains approx. 765,000 rows and has three distinct data_logger_id values. I can make a quick query for each of them using:
SELECT max(date_time) FROM data_values WHERE data_logger_id = 1;
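For example, I could spell the groups out by hand, something like this (just a sketch, assuming the three data_logger_id values are 1, 2 and 3):

SELECT 1 AS data_logger_id, max(date_time) FROM data_values WHERE data_logger_id = 1
UNION ALL
SELECT 2, max(date_time) FROM data_values WHERE data_logger_id = 2
UNION ALL
SELECT 3, max(date_time) FROM data_values WHERE data_logger_id = 3;

but I would rather have the GROUP BY version use the indexes on its own.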
I have an index on the date_time field and one on the data_logger_id field, and I have also tried a combined index on (data_logger_id, date_time). Does anyone have any idea what's going on, and any suggestions for how to speed up the query?
Regards Jonas:)))
I'm using PostgreSQL 8.2.3 on Windows XP.
My table:
CREATE TABLE data_values
(
data_value_id serial NOT NULL,
data_type_id integer NOT NULL,
data_collection_id integer NOT NULL,
data_logger_id integer NOT NULL,
date_time timestamp without time zone NOT NULL,
lat_wgs84 double precision NOT NULL,
lon_wgs84 double precision NOT NULL,
height integer NOT NULL,
parallell integer NOT NULL DEFAULT 0,
data_value double precision NOT NULL,
sensor_id integer,
CONSTRAINT data_values_pkey PRIMARY KEY (data_value_id),
CONSTRAINT data_values_data_collection_id_fkey FOREIGN KEY (data_collection_id)
REFERENCES data_collections (data_collection_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE RESTRICT,
CONSTRAINT data_values_data_logger_id_fkey FOREIGN KEY (data_logger_id)
REFERENCES data_loggers (data_logger_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE RESTRICT,
CONSTRAINT data_values_data_type_id_fkey FOREIGN KEY (data_type_id)
REFERENCES data_types (data_type_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE RESTRICT,
CONSTRAINT data_values_sensor_id_fkey FOREIGN KEY (sensor_id)
REFERENCES sensors (sensor_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE RESTRICT,
CONSTRAINT data_values_data_type_id_key UNIQUE (data_type_id,
data_logger_id, date_time, lat_wgs84, lon_wgs84, height, parallell)
);
CREATE INDEX data_values_data_date_time_index
ON data_values
USING btree
(date_time);
CREATE INDEX data_values_data_logger_id_index
ON data_values
USING btree
(data_logger_id);
CREATE INDEX data_values_time_logger_index
ON data_values
USING btree
(data_logger_id, date_time);
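Given the (data_logger_id, date_time) index above, I was also wondering whether a correlated subquery against the data_loggers table (the table my foreign key points at) could use that index once per logger. Something like this sketch, which I have not tested:

SELECT dl.data_logger_id,
       (SELECT max(dv.date_time)
          FROM data_values dv
         WHERE dv.data_logger_id = dl.data_logger_id) AS max_date_time
  FROM data_loggers dl;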