From: "Sean Davis" <sdavis2(at)mail(dot)nih(dot)gov>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: Building full-text index
Date: 2007-11-16 02:57:00
Message-ID: 264855a00711151857p6f4d3e46vd9d953162c728fdf@mail.gmail.com
Lists: pgsql-novice
On Nov 15, 2007 9:51 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> "Sean Davis" <sdavis2(at)mail(dot)nih(dot)gov> writes:
> > I am trying to build a full-text index (gin(to_tsvector('english',
> > title || abstract))) on about 18 million abstracts and titles from
> > medical literature. However, I keep getting out-of-memory errors. (I
> > am on a 32GB Linux system with maintenance_work_mem set to 20GB and
> > shared_buffers at 4GB; Postgres 8.3beta). Does creation of a
> > full-text index require that the entire index fit into memory?
>
> I can't reproduce any memory-leak issue here. I wonder whether your
> maintenance_work_mem setting is optimistically large (like, higher
> than the ulimit restriction on the postmaster).
Thanks, Tom. ulimit -a shows unlimited, but there may be something
else going on. I'll try leaving it lower and see what that does for
me.
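
For the archives, a minimal sketch of the kind of DDL and settings under discussion. The table and column names are assumptions (the actual schema isn't shown in this thread); the index expression matches the one quoted above, with coalesce() added as a guard:

```sql
-- Hypothetical table; the real schema isn't shown in the thread.
-- CREATE TABLE abstracts (id serial PRIMARY KEY, title text, abstract text);

-- Keep maintenance_work_mem within what the postmaster can actually
-- allocate; a value near or above physical RAM can cause out-of-memory
-- failures during index build.
SET maintenance_work_mem = '1GB';

-- coalesce() guards against NULLs: title || NULL yields NULL,
-- which would silently drop those rows from the index.
CREATE INDEX abstracts_fts_idx ON abstracts
    USING gin (to_tsvector('english',
        coalesce(title, '') || ' ' || coalesce(abstract, '')));
```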
Sean