| From: | Rod Taylor <rbt(at)rbt(dot)ca> |
|---|---|
| To: | Reiner Dassing <dassing(at)wettzell(dot)ifag(dot)de> |
| Cc: | pgsql-sql(at)postgresql(dot)org |
| Subject: | Re: Indices are not used by the optimizer |
| Date: | 2003-05-05 13:33:05 |
| Message-ID: | 1052141584.9846.24.camel@jester |
| Lists: | pgsql-performance pgsql-sql |
Are you really expecting 19 million rows to be returned -- are you
really going to use them all?
How about the explain analyze output?
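For example, running the same query under explain analyze will show the actual
row counts and timings alongside the planner's estimates:

    explain analyze select * from wetter where epoche > '2001-01-01';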
Have you tried using a cursor to allow for parallel processing? (pull
1000 rows, do work, pull the next 1000 rows, do work, etc.)
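A rough sketch of what that could look like, using the table from your query
(the cursor name and batch size are just for illustration):

    BEGIN;
    DECLARE wetter_cur CURSOR FOR
        SELECT * FROM wetter WHERE epoche > '2001-01-01';
    FETCH 1000 FROM wetter_cur;  -- process this batch in the client
    FETCH 1000 FROM wetter_cur;  -- repeat until no rows come back
    CLOSE wetter_cur;
    COMMIT;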
> wetter=# explain select * from wetter where epoche > '2001-01-01';
> QUERY PLAN
> -------------------------------------------------------------------------
> Seq Scan on wetter (cost=0.00..614795.55 rows=19054156 width=16)
> Filter: (epoche > '2001-01-01 00:00:00+00'::timestamp with time zone)
> (2 rows)
--
Rod Taylor <rbt(at)rbt(dot)ca>
PGP Key: http://www.rbt.ca/rbtpub.asc