From: Richard Huxton <dev(at)archonet(dot)com>
To: Abbath <abbath(at)invitel(dot)hu>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: tsearch is slow
Date: 2006-03-15 14:35:26
Message-ID: 4418262E.8040802@archonet.com
Lists: pgsql-general
Abbath wrote:
> is slow for the first time (7-15 sec), but then using the same keyword
> the next time it is fast (10-100 ms). The reason, as I read, is that
> the first time nothing is cached at all, but the next time the index
> pages are cached, so it is fast.
>
> I think that in a real-world application, in this form, it is useless,
> because a 15 sec search time is not acceptable to any user.
What, never? Even if this facility is only used once a year by one user
and you have 1000 other users who need their queries to complete in 0.2
secs at most? What you mean is that it's not useful for *your*
application - don't assume the same applies to all applications.
> Is there a way
> to initially cache the index or tune some Postgres parameter? (I tried
> increasing "shared_buffers", "effective_cache_size", and "work_mem",
> but it had no effect.) Or should I look for another search technology?
> (I just want to provide a 2 sec maximum search time on 1 million
> records; I don't think that is a big expectation nowadays.)
If you want to force the data to be cached, just put a cron-job in to
run a query for "abc" or whatever once a minute.
Of course, if it turns out that your users aren't running the query very
often then you're wasting resources, and if they are running it often
then again you're wasting resources. But - if you really need to, that's
the solution.
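A minimal sketch of that cron approach, assuming a hypothetical database "mydb" with a table "docs" and a tsvector column "idxfti" (all placeholder names, not from the original post); adjust the query, names, and connection details to your own schema:

```shell
#!/bin/sh
# Hypothetical cache-warming script (warm_tsearch_cache.sh).
# "mydb", "docs", and "idxfti" are placeholders -- substitute your own
# schema and connection details. Running a representative tsearch query
# touches the index pages so they stay in PostgreSQL's buffer cache and
# the OS page cache.
psql -d mydb -q -A -t -c \
    "SELECT count(*) FROM docs WHERE idxfti @@ to_tsquery('abc');" \
    > /dev/null

# Matching crontab entry, running the script once a minute:
# * * * * * /usr/local/bin/warm_tsearch_cache.sh
```

The query only needs to hit the index, so a cheap count over a representative keyword is enough; discarding the output keeps the cron job silent.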
--
Richard Huxton
Archonet Ltd