From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>
Cc: Jan Urbański <j(dot)urbanski(at)students(dot)mimuw(dot)edu(dot)pl>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Google Summer of Code 2008
Date: 2008-03-08 20:13:18
Message-ID: 6430.1205007198@sss.pgh.pa.us
Lists: pgsql-hackers
Oleg Bartunov <oleg(at)sai(dot)msu(dot)su> writes:
> On Sat, 8 Mar 2008, Jan Urbański wrote:
>> I have a feeling that in many cases identifying the top 50 to 300 lexemes
>> would be enough to talk about text search selectivity with a degree of
>> confidence. At least we wouldn't give overly low estimates for queries
>> looking for very popular words, which I believe is worse than giving an
>> overly high estimate for an obscure query (am I wrong here?).
> Unfortunately, selectivity estimation for a query is much more difficult
> than just estimating the frequency of an individual word.
It'd be an oversimplification, sure, but almost any degree of smarts
would be a huge improvement over what we have now ...
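Jan's idea of keeping frequency statistics for only the most common lexemes is essentially a most-common-values approach. A toy sketch of that scheme is below (the function names, the word-based tokenization, and the 0.005 fallback are illustrative assumptions, not PostgreSQL's actual tsvector statistics code):

```python
from collections import Counter

def build_lexeme_stats(documents, top_k=100):
    """Count per-document lexeme frequencies and keep only the top_k
    most common, analogous to a most-common-values list."""
    df = Counter()
    for doc in documents:
        for lex in set(doc.split()):  # count each lexeme once per document
            df[lex] += 1
    n = len(documents)
    return {lex: cnt / n for lex, cnt in df.most_common(top_k)}

def estimate_selectivity(word, stats, default=0.005):
    """Use the stored frequency for popular lexemes; fall back to a
    small default (assumed here) for words outside the top-k list."""
    return stats.get(word, default)

docs = ["postgres query planner", "postgres index scan",
        "postgres text search", "vacuum analyze"]
stats = build_lexeme_stats(docs, top_k=2)
# 'postgres' appears in 3 of 4 documents
print(estimate_selectivity("postgres", stats))  # 0.75
print(estimate_selectivity("obscure", stats))   # 0.005 (default)
```

As both messages note, this only covers single-word lookups: a real estimator also has to combine per-lexeme frequencies for multi-word queries, where independence assumptions can go badly wrong.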
regards, tom lane