From: | "Scott Carey" <scott(at)richrelevance(dot)com> |
---|---|
To: | "Christiaan Willemsen" <cwillemsen(at)technocon(dot)com> |
Cc: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: Configuring for maximum memory usage |
Date: | 2008-10-30 16:18:23 |
Message-ID: | a1ec7d000810300918w666f705eva3f8fb720dfd7aaf@mail.gmail.com |
Lists: pgsql-performance
You must either increase the memory that ZFS uses, or increase PostgreSQL's
shared_buffers and work_mem, so that the aggregate of the two uses more RAM.
I believe that you have not told ZFS to reserve 8GB, but rather told it to
limit itself to 8GB.
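As a minimal sketch of what that limit looks like (assuming OpenSolaris,
where the ARC cap goes in /etc/system and takes effect after a reboot; the
byte values are examples only):

    # /etc/system -- this caps the ZFS ARC, it does not reserve memory
    # 8GB = 8 * 1024^3 bytes
    set zfs:zfs_arc_max = 8589934592
    # the 48GB alternative discussed below would be:
    # set zfs:zfs_arc_max = 51539607552

Since this is a ceiling, the ARC can still shrink below it under memory
pressure, which is why it behaves as a limit rather than a reservation.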
Some comments below:
On Thu, Oct 30, 2008 at 8:15 AM, Christiaan Willemsen <cwillemsen(at)technocon(dot)com> wrote:
> Hi there,
>
> I configured OpenSolaris on our machine. Specs:
>
> 2x Quad 2.6 Ghz Xeon
> 64 GB of memory
> 16x 15k5 SAS
>
If you do much writing, and even more so with ZFS, it is critical to put the
WAL log on a different ZFS volume (and perhaps different disks) than the data
and indexes.
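A rough sketch of the relocation (the pool, dataset, and data directory
paths here are hypothetical; adjust them to your layout, and stop postgres
before moving anything):

    # create a separate dataset, ideally backed by separate disks
    zfs create -o mountpoint=/pgwal tank/pgwal
    # move the 8.3-era transaction log and point postgres at the new location
    mv /var/postgres/data/pg_xlog /pgwal/pg_xlog
    ln -s /pgwal/pg_xlog /var/postgres/data/pg_xlog

This keeps the WAL's sequential, synchronous writes from competing with
random I/O on the data files.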
>
>
> The filesystem is configured using ZFS, and I think I have found a
> configuration that performs fairly well.
>
> I installed the standard PostgreSQL that came with the OpenSolaris disk
> (8.3), and later added support for PostGIS. All fine.
> I also tried to tune postgresql.conf to maximize performance and also
> memory usage.
>
> Since PostgreSQL is the only thing running on this machine, we want it to
> take full advantage of the hardware. For the ZFS cache, we have 8 GB
> reserved. The rest can be used by postgres.
>
What setting reserves (but does not limit) ZFS to a given memory size? I am
not familiar with one that behaves that way, but I could be wrong. Try
setting the limit to 48GB instead (leaving 16GB for the db and misc).
>
> The problem is getting it to use that much. At the moment, it uses only
> about 9 GB, which is far from enough. The problem is getting it to use more...
> I hope you can help me get a working config.
>
Are you counting both the memory used by postgres and the memory used by the
ZFS ARC cache? It is the combination you are interested in, and performance
will be better if it is biased towards one being a good chunk larger than
the other. In my experience, if you are doing more writes, a larger file
system cache is better; if you are doing more reads, a larger postgres cache
is better (the overhead of calling read() in 8k chunks to the OS, even when
the data is cached, increases CPU use).
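To check what the combination actually is, something along these lines
should work on Solaris-derived systems (the exact statistic path may vary by
release, so treat it as an assumption to verify):

    # current ZFS ARC size, in bytes
    kstat -p zfs:0:arcstats:size
    # postgres shared memory segments, for comparison
    ipcs -m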
>
> Here are the parameters I set in the config file:
>
> shared_buffers = 8192MB
You probably want shared_buffers + the ZFS ARC (the "Adaptive Replacement
Cache," ZFS's file system cache, for those unfamiliar with it) to total
about 56GB, unless you have a lot of connections and heavily use temp tables
or work_mem; in that case, make the total less.
I recommend trying both:
shared_buffers = 48GB with ZFS limited to 8GB, and
shared_buffers = 8GB with ZFS limited to 48GB.
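Concretely, the postgresql.conf side of those two trials would look roughly
like this (the ARC cap itself goes in /etc/system, as sketched above):

    # trial A: bias towards postgres
    shared_buffers = 48GB    # with zfs_arc_max capped at 8GB

    # trial B: bias towards the file system cache
    shared_buffers = 8GB     # with zfs_arc_max capped at 48GB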
>
> work_mem = 128MB
> maintenance_work_mem = 2048MB
> max_fsm_pages = 204800
> max_fsm_relations = 2000
>
If you do very large aggregates, you may need as much as 1GB of work_mem.
However, a setting that high would require very careful tuning and a
reduction of the space used by shared_buffers and the ZFS ARC. It's
dangerous, since each connection running a large aggregate or sort may
consume a lot of memory.
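A safer pattern (a sketch; the table and query are hypothetical, but
session-level SET is standard postgres behavior) is to keep the global
default modest and raise work_mem only where it is needed:

    -- postgresql.conf keeps work_mem = 128MB globally;
    -- a session running one big aggregate raises it for itself:
    SET work_mem = '1GB';
    SELECT customer_id, sum(amount) FROM orders GROUP BY customer_id;

Keep in mind that work_mem is per sort/hash operation, not per connection:
50 sessions each running a single 1GB sort could by themselves consume 50GB.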
>
> Database is about 250 GB in size, so we really need to have as much data as
> possible in memory.
>
> I hope you can help us tweak a few parameters to make sure all memory will
> be used.
>