From: JB <jimbag(at)kw(dot)igs(dot)net>
To: pgsql-hackers(at)postgresql(dot)org
Subject: 'LIKE' enhancement
Date: 2000-03-07 23:53:06
Message-ID: 38C59662.2F10991C@kw.igs.net
Lists: pgsql-hackers
Hi all. I'm not subscribed to the list but I thought I'd drop this in
here as a suggestion. There is a thread in GENERAL called '50MB Table'
for those interested. The summary is, I have a roughly 50MB table with
about 70,000 records on which I was doing LIKE selects. It was taking
approx. 20 secs to complete the search. The table is something like...
CREATE TABLE info (
lastname char(50),
street_name char(50),
...(etc omitted)
);
CREATE INDEX nx_info1 ON info (lastname);
CREATE INDEX nx_info2 ON info (street_name);
on which I was doing...
SELECT * FROM info WHERE street_name LIKE 'MAIN%';
...this would take about 20 secs to complete. Because the wildcard only
appears at the end of the search string, I changed the query to...
SELECT * FROM info WHERE substring( street_name from 1 for 4 ) = 'MAIN';
...this takes under 2 secs. I wrote a piece of code in Python to do this
rewrite automatically for me, but it seems to me that the parser/optimizer
could recognize this case and rewrite the query to use '=' on the fixed
prefix instead of the 'LIKE'. I've looked through the C code to see where
this could happen, but it is too thick for me to sort out with my
schedule, so I thought I'd make the suggestion here.
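For what it's worth, here is a rough sketch of the kind of rewrite my
script does (not the script itself; the function name and the
single-trailing-'%' assumption are just for illustration)...
import re
def rewrite_prefix_like(column, pattern):
    """Rewrite "col LIKE 'prefix%'" as an equivalent substring test.
    Only the simple case is handled: a literal prefix followed by one
    trailing '%' and no other wildcards; anything else keeps the LIKE.
    """
    m = re.fullmatch(r"([^%_\\]+)%", pattern)
    if m is None:
        # Not a plain prefix pattern; leave the original LIKE alone.
        return "%s LIKE '%s'" % (column, pattern)
    prefix = m.group(1)
    # Quote-escaping of the prefix is ignored here for brevity.
    return "substring(%s from 1 for %d) = '%s'" % (column, len(prefix), prefix)
# Turns street_name LIKE 'MAIN%' into
# substring(street_name from 1 for 4) = 'MAIN'
print(rewrite_prefix_like("street_name", "MAIN%"))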
cheers
jim
--
If everything is coming your way then you're in the wrong lane.