From: Mike Rylander <mrylander(at)gmail(dot)com>
To: Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>
Cc: Rick Jansen <rick(at)rockingstone(dot)nl>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Tsearch2 performance on big database
Date: 2005-03-22 12:48:06
Message-ID: b918cf3d050322044847d3f9e1@mail.gmail.com
Lists: pgsql-performance
On Tue, 22 Mar 2005 15:36:11 +0300 (MSK), Oleg Bartunov <oleg(at)sai(dot)msu(dot)su> wrote:
> On Tue, 22 Mar 2005, Rick Jansen wrote:
>
> > Hi,
> >
> > I'm looking for a *fast* solution to search through ~ 4 million records of book
> > descriptions. I've installed PostgreSQL 8.0.1 on a dual Opteron server with
> > 8G of memory, running Linux 2.6. I haven't done a lot of tuning on PostgreSQL
> > itself, but here are the settings I've changed so far:
> >
> > shared_buffers = 2000 (anything much bigger and the kernel doesn't allow it,
> > still have to look into that)
>
> Use something like
>     echo "150000000" > /proc/sys/kernel/shmmax
> to increase shared memory. In your case you could dedicate much more
> memory.
>
> Regards,
> Oleg
And Oleg should know. Unless I'm mistaken, he (co)wrote tsearch2.
Other than shared buffers, I can't imagine what could be causing that
kind of slowness. EXPLAIN ANALYZE, please?
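In case it helps, here's roughly how I'd apply Oleg's shmmax suggestion
so it survives a reboot -- the numbers below are only illustrative, so
tune them for your box:

    # make the larger shmmax permanent instead of echoing into /proc
    echo "kernel.shmmax = 150000000" >> /etc/sysctl.conf
    sysctl -p

    # then raise shared_buffers in postgresql.conf; 8.0 counts it in
    # 8kB pages, so e.g. 15000 is roughly 120MB, which still fits
    # under the shmmax above
    #   shared_buffers = 15000

You'll need to restart the postmaster for the new shared_buffers value
to take effect.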
As an example of what I think you *should* be seeing, I have a similar
box (4 procs, but that doesn't matter for one query) and I can search
a column with tens of millions of rows in around a second.
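Something along these lines is what I meant, by the way -- the table and
column names here are made up, so substitute your own schema (tsearch2 on
8.0 wants a GiST index on the tsvector column):

    # "books" and "idxfti" are placeholders for your table / tsvector column
    psql -d yourdb -c "
      EXPLAIN ANALYZE
      SELECT id, title
        FROM books
       WHERE idxfti @@ to_tsquery('default', 'history & rome');"

The interesting bit is whether the plan reports an index scan on the
tsvector index or a sequential scan over all 4 million rows.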
--
Mike Rylander
mrylander(at)gmail(dot)com
GPLS -- PINES Development
Database Developer
http://open-ils.org