From: "Bryan Murphy" <bryan(dot)murphy(at)gmail(dot)com>
To: "pgsql general" <pgsql-general(at)postgresql(dot)org>
Subject: full text index and most frequently used words
Date: 2008-02-08 17:34:02
Message-ID: bd8531800802080934t4a4b7acdm9d8bb3285f18756b@mail.gmail.com
Lists: pgsql-general
I'm a bit of a novice writing tsearch2 queries, so forgive me if this
is a basic question.
We have a table with 2 million+ records containing a considerable amount
of text content. Some search terms (such as comedy, new, news, music,
etc.) cause a significant performance hit on our web site. They match
simply too many records in the table, and the ranking function takes
too long to rank them all.
We've partially solved this problem by manually identifying
non-performant search queries and pre-caching the results (think
materialized view). However, this process is starting to become a
burden, and we can't properly anticipate what our community is going
to be searching for in the future.
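For context, our pre-caching approach looks roughly like the sketch below. All names here are illustrative, not our actual schema, and it assumes PostgreSQL 8.3's built-in text search (with the older tsearch2 module, the function is rank() rather than ts_rank()):

```sql
-- Hypothetical cache table holding pre-computed, ranked results
-- for queries we've identified as slow.
CREATE TABLE search_cache (
    query_text text PRIMARY KEY,
    result_ids integer[],                        -- ranked item ids
    refreshed  timestamptz NOT NULL DEFAULT now()
);

-- Refreshed periodically (e.g. from cron) for each known-slow term:
UPDATE search_cache
SET result_ids = ARRAY(
        SELECT id FROM items
        WHERE tsv @@ to_tsquery('comedy')
        ORDER BY ts_rank(tsv, to_tsquery('comedy')) DESC
        LIMIT 100),
    refreshed = now()
WHERE query_text = 'comedy';
```

The web site then reads result_ids from search_cache instead of ranking 2 million rows per request, which is why it only helps for terms we anticipated.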
What I'd like to know is if there is an easy way to use the full
text index to generate a list of the most common words. I could write
this code manually, but I'm hoping there's a better (simpler) way.
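In case it helps frame the question: the kind of thing I'm after would look like the sketch below, assuming an items table with a tsvector column named tsv (both names illustrative). PostgreSQL 8.3 ships a ts_stat() function for exactly this; with the tsearch2 module, the equivalent is stat():

```sql
-- ts_stat scans the tsvector values returned by the inner query and
-- reports, per lexeme: ndoc (documents containing it) and nentry
-- (total occurrences). Sorting by ndoc gives the most common words.
SELECT word, ndoc, nentry
FROM ts_stat('SELECT tsv FROM items')
ORDER BY ndoc DESC, nentry DESC
LIMIT 100;
```

Note that this does a full scan of the tsvector column, so it is something to run offline and cache rather than per request.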
Thanks,
Bryan