optimizing server for a 10 million row table

From: Tony Caduto <tony(dot)caduto(at)amsoftwaredesign(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: optimizing server for a 10 million row table
Date: 2006-01-21 18:16:04
Message-ID: 43D27A64.6080103@amsoftwaredesign.com
Lists: pgsql-general

Hi,
We have a client with a table that holds sales transactions, and it is now at
10 million rows. The data was in MS Access, which could only hold about
2 million rows, so we installed Postgres for them and they dumped the
10 million rows from their mainframe into Postgres.

I was just wondering if anyone had suggestions for optimizing the
postgresql.conf file, and how much OS shared memory I should
reserve (Red Hat is set by default to allow 128 MB of shared memory).
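For example, if Postgres were to get roughly 256 MB of shared buffers, I assume
the kernel limit would need to be raised above the 128 MB default with something
like the following (the exact figure is just a guess on my part, not something I
have tested):

    # allow up to ~350 MB of SysV shared memory (value is in bytes)
    sysctl -w kernel.shmmax=367001600
    # make the setting persistent across reboots
    echo "kernel.shmmax=367001600" >> /etc/sysctl.conf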

We are running Red Hat Enterprise Linux 4 AS on a dual-processor P4
Xeon with 2.5 GB of RAM.
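As a rough starting point I was thinking of something along these lines in
postgresql.conf (assuming 8.1-style settings, where shared_buffers and
effective_cache_size are counts of 8 kB buffers and work_mem is in kB); these
are untested guesses based on the 2.5 GB of RAM, not measured values:

    shared_buffers = 32768          # ~256 MB of shared buffers (8 kB each)
    effective_cache_size = 196608   # tell the planner ~1.5 GB of OS cache is available
    work_mem = 16384                # 16 MB per sort/hash operation
    maintenance_work_mem = 131072   # 128 MB for VACUUM, CREATE INDEX, etc.
    checkpoint_segments = 16        # spread out checkpoint I/O during bulk loads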

The table in question has 20 fields, and I can post the DDL for it if needed.

Thanks,

--
Tony Caduto
AM Software Design
Home of PG Lightning Admin for Postgresql
http://www.amsoftwaredesign.com
