From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: twoflower <standa(dot)kurik(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org, Oleg Bartunov <obartunov(at)gmail(dot)com>, Teodor Sigaev <teodor(at)sigaev(dot)ru>
Subject: Re: Text search dictionary vs. the C locale
Date: 2017-07-02 16:06:49
Message-ID: 28705.1499011609@sss.pgh.pa.us
Lists: pgsql-general
twoflower <standa(dot)kurik(at)gmail(dot)com> writes:
> I am having problems creating an Ispell-based text search dictionary for
> Czech language.
> Issuing the following command:
>     create text search dictionary czech_ispell (
>         template = ispell,
>         dictfile = czech_ispell,
>         afffile = czech_ispell
>     );
> ends with
> ERROR: syntax error
> CONTEXT: line 252 of configuration file
> "/usr/share/postgresql/9.6/tsearch_data/czech_ispell.affix": " . > TŘIA
> The dictionary files are in UTF-8. The database cluster was initialized with
> initdb --locale=C --encoding=UTF8
Presumably the problem is that the dictionary-file parsing functions
reject anything that doesn't satisfy t_isalpha() (unless it matches
t_isspace()), and in C locale that's not going to accept very much.
I wonder why we're doing it like that. It seems like it'd often be
useful to load dictionary files that don't match the database's
prevailing locale. Do we really need the t_isalpha tests, or would
it be good enough to assume that anything that isn't t_isspace is
part of a word?
regards, tom lane