From: Michael Engelhart <mengelhart(at)mac(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: sequential scan performance
Date: 2005-05-29 13:27:26
Message-ID: D520F8B3-20D6-4272-A6D6-8B690871DE73@mac.com
Lists: pgsql-performance
Hi -
I have a table of about 3 million rows of city "aliases" that I need
to query using LIKE - for example:
select * from city_alias where city_name like '%FRANCISCO'
When I do an EXPLAIN ANALYZE on the above query, the result is:
Seq Scan on city_alias  (cost=0.00..59282.31 rows=2 width=42) (actual time=73.369..3330.281 rows=407 loops=1)
  Filter: ((name)::text ~~ '%FRANCISCO'::text)
Total runtime: 3330.524 ms
(3 rows)
This is a query that our system needs to run a LOT. Is there any way
to improve its performance, either by changing the query or by
configuring the database deployment? We have an index on city_name,
but when the pattern starts with a % wildcard, PostgreSQL can't use
the index.
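For reference, a minimal sketch of the setup and of one workaround
that is often suggested for this situation. The index names here are
hypothetical, and reverse() is assumed to be available as a text
function (it is built into later PostgreSQL releases; older
installations would need a small user-defined equivalent):

    -- Existing index (hypothetical name); a plain btree can't serve a
    -- pattern with a leading %, since the wildcard defeats the
    -- left-anchored ordering the btree relies on.
    CREATE INDEX city_alias_city_name_idx ON city_alias (city_name);

    -- Possible workaround: an expression index on the reversed string,
    -- so the suffix match becomes a prefix match the planner can use
    -- (text_pattern_ops makes the prefix match indexable regardless of
    -- locale).
    CREATE INDEX city_alias_city_name_rev_idx
        ON city_alias (reverse(city_name) text_pattern_ops);

    SELECT *
    FROM city_alias
    WHERE reverse(city_name) LIKE reverse('%FRANCISCO');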
Thanks for any help.
Mike