From: Josh Berkus <josh(at)agliodbs(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: PostgreSQL Configuration Tool for Dummies
Date: 2007-06-19 22:46:37
Message-ID: 200706191546.38140.josh@agliodbs.com
Lists: pgsql-performance
Lance,
> The parameters I would think we should calculate are:
>
> max_connections
>
> shared_buffers
>
> work_mem
>
> maintenance_work_mem
>
> effective_cache_size
>
> random_page_cost
Actually, I'm going to argue against messing with random_page_cost. It's a
cannon being used when a slingshot is called for. Instead (and this was
the reason for the "What kind of CPU?" question) you want to reduce the
cpu_* costs. I generally find that if the cpu_* costs are reduced to suit
modern, faster CPUs, and effective_cache_size is set appropriately, a
random_page_cost of 3.5 yields sensible index scan choices.
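For concreteness, that approach might look like the postgresql.conf sketch
below. The numbers are illustrative assumptions, not values from the
spreadsheet; the stock defaults being halved are cpu_tuple_cost = 0.01,
cpu_index_tuple_cost = 0.005, and cpu_operator_cost = 0.0025:

```
# Illustrative only -- halve the cpu_* defaults for a modern, fast CPU
cpu_tuple_cost = 0.005          # default 0.01
cpu_index_tuple_cost = 0.0025   # default 0.005
cpu_operator_cost = 0.00125     # default 0.0025
effective_cache_size = 2GB      # roughly the RAM left to the OS cache
random_page_cost = 3.5          # rather than lowering it aggressively
```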
If you check out my spreadsheet version of this:
http://pgfoundry.org/docman/view.php/1000106/84/calcfactors.sxc
... you'll see that the approach I found most effective was to create
profiles for each of the types of db applications, and then adjust the
numbers based on those.
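A minimal sketch of that profile-driven calculation, in Python; the profile
names, fractions, and the suggest() helper are all hypothetical illustrations
of the idea, not the formulas from the spreadsheet:

```python
# Hypothetical sketch: pick a workload profile, then derive a few
# postgresql.conf values from total RAM and expected connections.
# All fractions here are illustrative assumptions.

PROFILES = {
    # fraction of RAM for shared_buffers; per-connection work_mem fraction
    "web":       {"shared_buffers_frac": 0.15, "work_mem_frac": 0.005},
    "oltp":      {"shared_buffers_frac": 0.25, "work_mem_frac": 0.01},
    "warehouse": {"shared_buffers_frac": 0.25, "work_mem_frac": 0.05},
}

def suggest(profile: str, ram_mb: int, max_connections: int) -> dict:
    p = PROFILES[profile]
    shared_buffers = int(ram_mb * p["shared_buffers_frac"])
    work_mem = max(1, int(ram_mb * p["work_mem_frac"]))
    # Rule of thumb: assume ~3/4 of RAM ends up usable as OS cache.
    effective_cache_size = int(ram_mb * 0.75)
    return {
        "max_connections": max_connections,
        "shared_buffers": f"{shared_buffers}MB",
        "work_mem": f"{work_mem}MB",
        "maintenance_work_mem": f"{min(shared_buffers, ram_mb // 16)}MB",
        "effective_cache_size": f"{effective_cache_size}MB",
    }

print(suggest("oltp", ram_mb=4096, max_connections=100))
```

The point is only the structure: each profile carries its own scaling
factors, so tuning a new application type means adding a profile rather
than changing the arithmetic.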
Other things to adjust:
wal_buffers
checkpoint_segments
commit_delay
vacuum_cost_delay
autovacuum
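As a hedged sketch, starting points for those might look like the fragment
below; every value here is an illustrative assumption to be tuned per
workload, not a recommendation from the spreadsheet:

```
# Illustrative starting points only
wal_buffers = 1MB             # the 64kB default is often too small
checkpoint_segments = 16      # default 3; raise for write-heavy loads
commit_delay = 0              # leave at 0 unless commit rates are very high
vacuum_cost_delay = 10ms      # throttle vacuum's I/O impact
autovacuum = on
```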
Anyway, do you have a pgfoundry ID? I should add you to the project.
--
--Josh
Josh Berkus
PostgreSQL @ Sun
San Francisco