From: Andy Colson <andy(at)squeakycode(dot)net>
To: ijabz(at)fastmail(dot)fm
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Tuning Postgres for single user manipulating large amounts of data
Date: 2010-12-09 14:59:52
Message-ID: 4D00EEE8.70408@squeakycode.net
Lists: pgsql-general
On 12/9/2010 8:50 AM, Andy Colson wrote:
> On 12/9/2010 6:25 AM, Paul Taylor wrote:
>> Hi, I'm using Postgres 8.3 on a MacBook Pro laptop.
>> I'm using the database with just one db connection to build a Lucene
>> search index from some of the data, and I'm trying to improve
>> performance. The key thing is that I'm only a single user but
>> manipulating large amounts of data, i.e. processing tables with up to 10
>> million rows in them, so I think I want to configure Postgres so that it
>> can create large temporary tables in memory.
>>
>> I've tried changing various parameters such as shared_buffers, work_mem
>> and checkpoint_segments, but I don't really understand what the values
>> mean, and the documentation seems to be aimed at configuring for
>> multiple users, so my changes make things worse. For example, my machine
>> has 2GB of memory, and I read that if using it as a dedicated server you
>> should set shared memory to 40% of total memory, but when I increase it
>> to more than 30MB Postgres will not start, complaining about my SHMMAX
>> limit.
>>
>> Paul
>>
>
> You need to bump up your SHMMAX is your OS.
Sorry: SHMMAX _in_ your OS.
It's an OS setting, not a PG one.
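
For the archives, here's roughly what that looks like on Mac OS X of
that era. A minimal sketch, assuming you want about 800MB (40% of your
2GB) of shared memory; the keys go in /etc/sysctl.conf and take effect
after a reboot, and the exact values are illustrative only:

    # /etc/sysctl.conf -- illustrative values for a 2GB machine.
    # SHMMAX is in bytes and must be a multiple of 4096 on OS X;
    # SHMALL is in 4kB pages (838860800 / 4096 = 204800).
    kern.sysv.shmmax=838860800
    kern.sysv.shmmin=1
    kern.sysv.shmmni=32
    kern.sysv.shmseg=8
    kern.sysv.shmall=204800

After rebooting you can check what actually took effect with
`sysctl kern.sysv.shmmax`.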
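
And once the kernel allows it, a rough starting point for a
single-user bulk job in postgresql.conf might be something like this
(all of these settings exist in 8.3; the numbers are guesses to tune
from, not recommendations):

    shared_buffers = 512MB        # ~25% of RAM; needs the SHMMAX bump above
    work_mem = 128MB              # per sort/hash; safer with only one connection
    maintenance_work_mem = 256MB  # speeds up CREATE INDEX and VACUUM
    temp_buffers = 128MB          # per-session buffer space for temp tables
    checkpoint_segments = 32      # fewer, larger checkpoints during bulk writes
    synchronous_commit = off      # OK for rebuildable data; fsync stays on

With a single session you can afford a much larger work_mem than a
multi-user box could, since there's no risk of dozens of backends each
claiming that much memory at once.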
-Andy