From: "Brett W(dot) McCoy" <bmccoy(at)chapelperilous(dot)net>
To: Boris <koester(at)x-itec(dot)de>
Cc: <pgsql-novice(at)postgresql(dot)org>
Subject: Re: Re[2]: Blob question -(((
Date: 2001-01-01 19:21:35
Message-ID: Pine.LNX.4.30.0101011413110.28136-100000@chapelperilous.net
Lists: pgsql-novice
On Mon, 1 Jan 2001, Boris wrote:
> BWM> 50-100k COLUMNs per row? Or are you talking about binary files of
> BWM> 50-100K? You definitely need to use the large object features of
> BWM> PostgreSQL.
>
> Yes, I need approx. 50-100K to store ASCII data for later
> full-text search -((
Ah, now I see. Large objects may not be the solution if you are storing
text, because they won't be searchable (unless you build an external
search engine like mnoGoSearch on top, but that's really for web stuff).
However, all is not lost -- you can either break your text up into
distinct fields, like title, author, abstract, text paragraph 1, text
paragraph 2, and so on (this will entail a good bit of analysis and
design of proper data structures on your part), and use the full text
search that is in the contrib directory of the source distribution (see
the sketch below), or you can go the bleeding edge route and use the
beta TOAST project, which will allow row sizes greater than the current
limit. The latter may not be a good solution for a production database.
See http://postgresql.readysetnet.com/projects/devel-toast.html for more
details on TOAST.
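To give you a rough idea of the first option, here is a minimal sketch
of a split-into-fields schema -- the table and column names are just
made up for illustration. The ~* operator does a case-insensitive
regular expression match against a single field:

    CREATE TABLE articles (
        id       serial PRIMARY KEY,
        title    varchar(255),
        author   varchar(255),
        abstract text,
        body_p1  text,  -- first paragraph of the body text
        body_p2  text   -- and so on for further paragraphs
    );

    -- case-insensitive pattern match on individual fields
    SELECT id, title
      FROM articles
     WHERE abstract ~* 'postgres'
        OR body_p1  ~* 'postgres';

A plain regex match like this scans the whole table; the contrib full
text search code builds an index over the words so lookups stay fast,
but the idea of keeping each searchable chunk in its own column is the
same. And if the paragraphs would push a row over the size limit, the
same idea works with a separate paragraphs table, one row per paragraph.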
-- Brett
http://www.chapelperilous.net/~bmccoy/
---------------------------------------------------------------------------
How come everyone's going so slow if it's called rush hour?