From: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
To: Kevin Old <kold(at)carolina(dot)rr(dot)com>
Cc: pgsql <pgsql-general(at)postgresql(dot)org>, pgadmin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: [GENERAL] my.cnf to postgresql.conf Conversion
Date: 2002-10-31 22:46:59
Message-ID: Pine.LNX.4.33.0210311527280.6618-100000@css120.ihs.com
Lists: pgsql-admin pgsql-general
On 31 Oct 2002, Kevin Old wrote:
> Hello everyone,
>
> I have a question about setting the variables in the postgresql.conf
> file. I have the following settings in my my.cnf file for MySQL....and
> need to know which variables in the postgresql.conf file would
> correspond to the ones below, so that I may tune my system correctly.
>
> set-variable = key_buffer=384M
> set-variable = max_allowed_packet=1M
> set-variable = table_cache=64
> set-variable = thread_stack=64K
> set-variable = sort_buffer=2M
> set-variable = record_buffer=2M
> set-variable = thread_cache=8
> # Try number of CPU's*2 for thread_concurrency
> set-variable = thread_concurrency=8
> set-variable = myisam_sort_buffer_size=64M
Many of these have no direct one-to-one correlation in PostgreSQL; the only
ones that are pretty similar are sort_buffer and record_buffer.
The first ones to look at changing are:
max_connections (the default 32 is kinda small for some servers, but
fine for departmental servers. You may need to adjust fs.file-max in the
kernel (I'm assuming linux here) if you go very high here.)
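As a sketch of what that might look like (the numbers here are illustrative, not recommendations from this post):

```
# postgresql.conf -- illustrative value only
max_connections = 128

# /etc/sysctl.conf (Linux; only relevant if you raise max_connections a lot)
fs.file-max = 65536
```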
shared_buffers (These are measured in 8k blocks. I've found anywhere from
1000 to 20000 works well, anything over that tends to be a waste of
memory for most servers. Note you may need to adjust the settings for
kernel.shmall and kernel.shmmax if you want this to be very high).
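To see why kernel.shmmax may need adjusting, here is a rough sizing sketch: shared_buffers is counted in 8k blocks, so the shared memory PostgreSQL requests is at least the block count times 8192 bytes (the 20000 figure below is just the upper end of the range mentioned above).

```python
# Sketch: turn a shared_buffers setting (in 8 kB blocks) into the rough
# amount of shared memory PostgreSQL will request, as a floor for kernel.shmmax.
BLOCK_SIZE = 8 * 1024          # PostgreSQL buffer block size in bytes
shared_buffers_blocks = 20000  # illustrative value from the range above

shmmax_floor = shared_buffers_blocks * BLOCK_SIZE
print(shmmax_floor)            # 163840000 bytes, i.e. about 156 MB
```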
sort_mem (bigger isn't always better here. This is the amount of memory
EACH sort operation will grab when it runs, and it seems that it will grab
all of it whether it needs it or not, so be careful not to crank this up.
I've found that 2048 to 8192 are suitably large (it's measured in k, I
believe).)
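For instance (an illustrative value within the range above, not a recommendation for any particular workload):

```
# postgresql.conf -- sort_mem is per sort operation, measured in kB
sort_mem = 4096    # 4 MB per sort
```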
fsync (Setting this to false can more than double the speed of your
database, while slightly increasing the chance that an unscheduled power
down (i.e. a big dummy trips over your power cord kinda thing) will cost
you data; but in my testing I've never lost data by pulling the plug when
running with it set to false or true.)
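If you decide to take that trade-off, the setting itself is a one-liner (understand the risk before copying this):

```
# postgresql.conf -- trades crash durability for write speed
fsync = false
```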
Then you can look at these settings; they are used to tell the planner
how to execute your query.
random_page_cost (The default of 4 here is a bit high for most people. 1
to 2 seems a more realistic setting for machines with a bit of memory
where a lot of your result set may be cached.)
cpu_tuple_cost
cpu_index_tuple_cost (setting this lower makes the machine favor using
indexes)
cpu_operator_cost
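Putting those together, a hypothetical tuning pass might look like this (values are illustrative; only random_page_cost's default of 4 is stated above):

```
# postgresql.conf -- planner cost knobs (illustrative values)
random_page_cost     = 2        # down from the default of 4, for well-cached data
cpu_index_tuple_cost = 0.0005   # lowering this makes index scans look cheaper
```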
If you are gonna update thousands of rows at a time, look at increasing
these settings, which will let the database recover vacuumed rows from
tables that have had massive updates.
max_fsm_relations
max_fsm_pages
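A sketch of what raising them might look like (the values are illustrative, not from this post):

```
# postgresql.conf -- free space map sizing (illustrative values)
max_fsm_relations = 1000
max_fsm_pages     = 20000
```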