From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Nicolas Barbier <nicolas(dot)barbier(at)gmail(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WIP: preloading of ispell dictionary
Date: 2010-03-23 08:04:50
Message-ID: 162867791003230104p6ff8d946yd6b97c47f660fc6c@mail.gmail.com
Lists: pgsql-hackers
2010/3/23 Nicolas Barbier <nicolas(dot)barbier(at)gmail(dot)com>:
> 2010/3/23 Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>:
>
>> 2010/3/23 Takahiro Itagaki <itagaki(dot)takahiro(at)oss(dot)ntt(dot)co(dot)jp>:
>>
>>> The fundamental issue seems to be in the slow initialization of
>>> dictionaries. If so, how about adding a pre-complile tool to convert
>>> a dictionary into a binary file, and each backend simply mmap it?
>>
>> It means loading about 25 MB from disk for every first tsearch query -
>> sorry, I don't believe that can be good.
>
> The operating system's VM subsystem should make that a non-problem.
> "Loading" is also not the word I would use to indicate what mmap does.
Maybe we can do some manipulation in memory - I don't have any
knowledge of mmap. With a simple allocator we can have the dictionary
data as one block. The problem is the pointers, but I believe they can
be replaced by offsets.
Personally I dislike the idea of a dictionary precompiler - it is
another application to maintain, and maybe not necessary. And you
still need another application for loading.
p.s. I was able to serialise the Czech dictionary, because it uses only simple regexps.
Pavel
>
> Nicolas
>