From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Alexander Farber <alexander(dot)farber(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Would my postgresql 8.4.12 profit from doubling RAM?
Date: 2012-08-30 14:42:53
Message-ID: CAOR=d=1Ox+A04oNr5BicdGgnghBs0vzgVDAi6Ay9i3KfH+-aDQ@mail.gmail.com
Lists: pgsql-general
On Thu, Aug 30, 2012 at 5:42 AM, Alexander Farber
<alexander(dot)farber(at)gmail(dot)com> wrote:
> Hello,
>
> I run CentOS 6.3 server with 16 GB RAM and:
> postgresql-8.4.12-1.el6_2.x86_64
> pgbouncer-1.3.4-1.rhel6.x86_64
>
> The modified params in postgresql.conf are:
> max_connections = 100
> shared_buffers = 4096MB
That's probably plenty. Remember that PostgreSQL uses memory in other
ways, so handing it all over to shared buffers is not a good idea anyway.
You might consider upping work_mem a bit, depending on your workload.
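For reference, here's roughly how those two settings might look in
postgresql.conf on a 16GB box like yours. The values are illustrative,
not a prescription:

```
# postgresql.conf (illustrative values for a 16GB server)
shared_buffers = 4096MB   # ~25% of RAM; going much higher rarely helps
work_mem = 8MB            # modest bump from the 1MB default; measure first
```

Reload or restart after changing these (shared_buffers needs a restart
on 8.4).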
> Below is a typical top output, the pref.pl is my game daemon:
>
> Mem: 16243640k total, 14091172k used, 2152468k free, 621072k buffers
> Swap: 2096056k total, 0k used, 2096056k free, 8929900k cached
This is the important stuff right here. Note you've got 2G of free
memory and almost 9G of cache. That's good; there's no memory pressure
on your server right now. Let's say you average about 100 active users
and work_mem is currently set to 1MB (the default). If you increase
that to 16MB, that's at most 1.6G of memory, which you have free right
now anyway. I've found that small increases of work_mem, into the 8 to
16MB range, often improve performance because they allow bigger queries
to fit their results into hash_* operations, which are a pretty fast
way of joining larger data sets.
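A low-risk way to try this is to raise work_mem for a single session
and compare plans before touching postgresql.conf globally. The query
below is a made-up placeholder; substitute one of your real slow
queries:

```sql
-- Bump work_mem for this session only, then re-run a typical query:
SET work_mem = '16MB';
EXPLAIN ANALYZE SELECT ...;  -- your real query here
-- "Sort Method: external merge  Disk: ..." in the output means the
-- node spilled to disk and would likely benefit from more work_mem;
-- an in-memory sort or hash is what you're hoping to see after the bump.
```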
If pgbouncer keeps your actual connections down to a few dozen at a
time, then you can look at going higher on work_mem, but with workloads
like the one you're describing you often won't see any further increase
in performance: once the data set used for hash_* operations fits in
memory, more work_mem won't help.
Note that work_mem is PER operation (each sort or hash node in a query
plan), not per connection, per user, or per database, so it can add up
REAL fast; that's why the default is so small (1MB). Overdoing it can
result in a server that falls to its knees under heavy load.
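To see how fast it adds up, here's a back-of-the-envelope sketch. The
per-query node count of 3 is an illustrative assumption, not a
measurement from your workload:

```python
# Rough worst-case ceiling for work_mem usage. Assumes every active
# connection runs a query whose plan contains several sort/hash nodes,
# each of which may allocate up to work_mem at the same time.

def worst_case_work_mem_mb(connections, nodes_per_query, work_mem_mb):
    """Upper bound (in MB) that sort/hash nodes could claim at once."""
    return connections * nodes_per_query * work_mem_mb

# 100 connections, 3 sort/hash nodes per query, work_mem = 16MB:
print(worst_case_work_mem_mb(100, 3, 16))  # prints 4800, i.e. ~4.7GB
```

That's already a third of your RAM from a 16MB setting, which is why
per-session testing beats a big global bump.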