From: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>
To: Christiaan Willemsen <cwillemsen(at)technocon(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Configuring for maximum memory usage
Date: 2008-10-30 16:05:52
Message-ID: 1225382752.32621.4.camel@jd-laptop.pragmaticzealot.org
Lists: pgsql-performance
On Thu, 2008-10-30 at 16:58 +0100, Christiaan Willemsen wrote:
> Joshua D. Drake wrote:
> >
> > PostgreSQL is only going to use what it needs. It relies on the OS for
> > much of the caching etc...
> >
> So that would actually mean that I could raise the setting of the ARC
> cache to far more than 8 GB? As I said, our database is 250 GB, So I
> would expect that postgres needs more than it is using right now...
> Several tables have over 500 million records (obviously partitioned).
> At the moment we are doing queries over large datasets, so I would
> assume that postgres would need a bit more memory than this..
Well, I can't actually answer this definitively. My knowledge of Solaris is
slimmer than my knowledge of other operating systems. However, it appears
from a brief google that the ARC cache is the ZFS Adaptive Replacement
Cache, a file-level cache whose size you can set on Solaris? If so, then
you want it high enough to keep your most active relations hot in cache.
Remember that beyond shared_buffers PostgreSQL doesn't cache anything on
its own, so if you don't want to hit disk the data has to be in the file
cache.
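If the ARC you mean is the ZFS Adaptive Replacement Cache, one common way
to bound it on Solaris is the zfs_arc_max tunable in /etc/system; the size
below is purely an illustrative assumption, not a recommendation:

```
* /etc/system -- cap the ZFS ARC (example: 48 GB; pick a value that
* leaves room for shared_buffers and the rest of the system),
* then reboot to apply
set zfs:zfs_arc_max = 51539607552
```

That keeps the ARC from competing with PostgreSQL for the same memory.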
> >
> > You are missing effective_cache_size. Try setting that to 32G.
> That one was set to 24 GB. But this setting only tells postgres how much
> caching it can expect from the OS? This is not actually memory that it
> will allocate, is it?
That is correct, it is not an actual allocation, but it does vastly affect
your query plans. PostgreSQL uses this parameter to help estimate whether
a relation is likely to be cached (see the comment about file cache above).
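For concreteness, a sketch of the relevant postgresql.conf line (24GB is
just the figure from your mail, not a recommendation):

```
# postgresql.conf -- a planner hint, not an allocation:
# roughly shared_buffers plus the OS file cache you expect to have
effective_cache_size = 24GB
```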
> >
> > You also didn't mention checkpoint_segments (which isn't memory but
> > still important) and default_statistics_target (which isn't memory but
> > still important).
> >
> is at the moment set to:
>
> checkpoint_segments = 40
> default_statistics_target is set to default (I think that is 10)
>
10 is likely way too low. Try 150 and make sure you run ANALYZE afterward.
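A minimal sketch of applying that; the table and column names are
placeholders, not from your schema:

```
-- after setting default_statistics_target = 150 in postgresql.conf
-- and reloading, rebuild the statistics:
ANALYZE;

-- or raise the target only where it matters (hypothetical names):
ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 150;
ANALYZE orders;
```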
As I recall some other databases allow you to say, "you have this much
memory, use it". PostgreSQL doesn't do that. You just give it pointers
and it will use as much as it needs within the limits of the OS. The key
word here is needs.
There is obviously some variance to that (work_mem, for example, is
allocated per sort or hash operation, so a single query can use it more
than once).
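Since work_mem is per sort or hash step rather than per connection, a
back-of-the-envelope worst case looks like this (all numbers are
illustrative assumptions, not settings I'm suggesting):

```python
# Rough worst case: every sort/hash node in every running query
# can use up to work_mem at the same time.
work_mem_mb = 64          # illustrative work_mem setting
max_connections = 100     # illustrative connection count
nodes_per_query = 2       # assumed sort/hash nodes per query

worst_case_mb = work_mem_mb * max_connections * nodes_per_query
print(worst_case_mb)  # 12800 MB in this illustrative case
```

which is why "give it pointers" still means thinking about the multipliers.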
Joshua D. Drake
--