From: | Aleksandr Parfenov <a(dot)parfenov(at)postgrespro(dot)ru> |
---|---|
To: | Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Teodor Sigaev <teodor(at)sigaev(dot)ru>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: Flexible configuration for full-text search |
Date: | 2018-09-11 10:31:50 |
Message-ID: | 20180911173150.421868d8@asp437-ThinkPad-L380 |
Lists: | pgsql-hackers |
Hello hackers!
As I wrote a few weeks ago, there is an issue with stopword processing in
the proposed syntax for full-text search configurations. I want to split
word normalization and stopword detection into two separate dictionaries.
The problem is how to configure the stopword detection dictionary.
The root of the problem is that stopwords are counted for word positions
even though no lexemes are produced for them. However, do we have to
count stopwords at all, or can we ignore them the same way we ignore
unknown words? The problem I see is backward compatibility, since all
existing queries and vectors would have to be regenerated. But is that a
real problem, or can we change the behavior in this way?
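
To illustrate the current counting behavior (a sketch with the built-in
english configuration, not the proposed syntax): a stopword produces no
lexeme but still advances the position counter:

    SELECT to_tsvector('english', 'the quick brown fox');
    -- 'brown':3 'fox':4 'quick':2   ("the" still occupies position 1)

If stopwords were skipped during counting, the same text would come out
as 'quick':1 'brown':2 'fox':3, so positions in already-stored vectors
and queries would no longer match; that is the compatibility concern.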
--
Aleksandr Parfenov
Postgres Professional: http://www.postgrespro.com
Russian Postgres Company