From: René Fournier <m5(at)renefournier(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Tuning PostgreSQL for very large database
Date: 2011-11-06 16:51:13
Message-ID: DC01E6C0-7008-4D25-8810-2BF52776306B@renefournier.com
Lists: pgsql-general
Just wondering what I can do to squeeze out more performance of my database application? Here's my configuration:
- Mac mini server
- Core i7 quad-core at 2GHz
- 16GB memory
- Dedicated fast SSD (two SSDs in the server)
- Mac OS X 10.7.2 (*not* using OS X Server)
- PostgreSQL 9.0.5
- PostGIS 1.5.3
- Tiger Geocoder 2010 database (from build scripts from http://svn.osgeo.org/postgis/trunk/extras/tiger_geocoder/tiger_2010/)
- Database size: ~90GB
I should say, this box does more than PostgreSQL geocoding/reverse-geocoding, so only about half of the memory can reasonably be allotted to PostgreSQL.
Coming from MySQL, I would normally tune my.cnf, using my-huge.cnf as a starting point. But I'm new to PostgreSQL and PostGIS (with a big database), so I was wondering if anyone had suggestions on tuning parameters (and which files to edit, etc.). Thanks!
…Rene
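[For context: the PostgreSQL counterpart to my.cnf is postgresql.conf in the data directory. A minimal starting sketch for a setup like the one described, assuming roughly 8 GB of the 16 GB is reserved for PostgreSQL and the data lives on SSD — the values are illustrative starting points, not benchmarked recommendations:]

```ini
# postgresql.conf -- illustrative starting values (PostgreSQL 9.0)
# Assumes ~8 GB of a 16 GB machine is available to PostgreSQL,
# with the rest left for other services on the same box.
shared_buffers = 2GB              # PostgreSQL's own buffer cache (restart required)
effective_cache_size = 6GB        # planner hint: OS cache + shared_buffers
work_mem = 32MB                   # per sort/hash operation; big GIS joins may want more
maintenance_work_mem = 512MB      # speeds up VACUUM and index builds
checkpoint_segments = 32          # fewer, larger checkpoints (9.0-era setting)
wal_buffers = 16MB
random_page_cost = 2.0            # below the 4.0 default to reflect cheap SSD seeks
```

Most of these take effect with `pg_ctl reload`, but `shared_buffers` and `wal_buffers` need a full restart. Note that on OS X the default System V shared-memory limit (`kern.sysv.shmmax`) is far too small for a 2 GB `shared_buffers`, so it likely needs raising in /etc/sysctl.conf first.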