From: Peter Geoghegan <pg(at)heroku(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>, Josh Berkus <josh(at)agliodbs(dot)com>
Subject: Re: Auto-tuning work_mem and maintenance_work_mem
Date: 2013-10-10 03:15:44
Message-ID: CAM3SWZTSxwXh6U4PN3XfSPegVJkiEEHnTkZaf-3Gi6eac8j1pg@mail.gmail.com
Lists: pgsql-hackers
On Wed, Oct 9, 2013 at 8:02 PM, Bruce Momjian <bruce(at)momjian(dot)us> wrote:
> I think the simplest solution would be to have a parameter to initdb
> which specifies how much memory you want to use, and set a new variable
> available_mem from that, and have things auto-tune based on that value
> in the backend.
Have you tried pgtune?
http://pgfoundry.org/projects/pgtune/
It's an extremely simple program - about 500 lines of Python. It asks
a few simple questions, and does as good a job of configuring Postgres
as most installations will ever need. Importantly, it asks the user to
characterize their workload as one of the following: DW, OLTP, Web,
Mixed, and Desktop. Why can't we just do something along those lines?
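To make the idea concrete, here is a minimal sketch of that style of
tuning - not pgtune's actual algorithm, and the fractions and
per-workload connection counts below are just commonly repeated folk
wisdom (e.g. shared_buffers at ~25% of RAM on a dedicated box,
effective_cache_size at ~75%), chosen for illustration only:

```python
# Illustrative sketch, NOT pgtune's real logic: derive a few memory
# GUCs from available memory plus a coarse workload profile.

def suggest_gucs(available_mem_mb, workload="Mixed"):
    # ~25% of RAM for shared_buffers is the oft-quoted rule of thumb
    # on a dedicated server; a desktop shares RAM with other apps.
    shared_frac = {"Web": 0.25, "OLTP": 0.25, "DW": 0.25,
                   "Mixed": 0.25, "Desktop": 0.10}[workload]
    # Expected concurrent connections (assumed values): DW runs few,
    # big queries; Web/OLTP run many small ones.
    conns = {"Web": 200, "OLTP": 300, "DW": 20,
             "Mixed": 100, "Desktop": 5}[workload]
    return {
        "shared_buffers": f"{int(available_mem_mb * shared_frac)}MB",
        # Planner hint, not an allocation, so it can be generous.
        "effective_cache_size": f"{int(available_mem_mb * 0.75)}MB",
        # Per-sort budget: divide a RAM slice by the connection count.
        "work_mem": f"{max(1, int(available_mem_mb * shared_frac / conns))}MB",
        "maintenance_work_mem": f"{min(1024, available_mem_mb // 16)}MB",
    }

print(suggest_gucs(4096, "Web"))
```

The point is how little code the whole exercise takes once the user has
answered "how much memory?" and "what kind of workload?".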
I know that I constantly find myself rattling off folk wisdom about
how to set the most important GUCs. I'm not alone here [1]. Isn't it
about time we just automated this?
[1] http://rhaas.blogspot.com/2012/03/tuning-sharedbuffers-and-walbuffers.html
--
Peter Geoghegan