From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Joshua Fielek" <jfielek(at)centriccrm(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Making the most of memory?
Date: 2008-01-23 05:20:43
Message-ID: dcc563d10801222120q78e390dbj2b884eca729c2975@mail.gmail.com
Lists: pgsql-performance
On Jan 22, 2008 10:11 PM, Joshua Fielek <jfielek(at)centriccrm(dot)com> wrote:
>
> Hey folks --
>
> For starters, I am fairly new to database tuning and I'm still learning
> the ropes. I understand the concepts but I'm still learning the real
> world impact of some of the configuration options for postgres.
>
> We have an application that has been having some issues with performance
> within postgres 8.1.9 and later 8.2.5. The upgrade to 8.2.5 gained us a
> nice little performance increase just off the improved query
> optimization, but we are still having other performance issues.
>
> The database itself is not that large -- a db_dump of the sql file as
> text is only about 110MB. I haven't checked the exact size of the actual
> data base, but the entire data directory is smaller than the available
> memory at about 385MB including logs and config files. This is a single
> database with a relatively small number of client connections (50 or so)
> making a fair number of smaller queries. This is not a massive data
> effort by any means at this time, but it will be growing.
>
> We have available currently ~4GB (8GB total) for Postgres.
How are you "allocating" this memory to postgresql? VM, ulimit? Or
are you just saying that you want to tune pgsql to use about 4Gig of
ram?
> We will be
> moving to a server that will have about 24GB (32GB total) available for
> the database, with the current server becoming a hot backup, probably
> with slony or something similar to keep the databases in sync.
>
> I've been monitoring the memory usage of postgres on the current system
> and it seems like none of the threads ever allocate more than about
> 400MB total and about 80-90MB shared memory. It seems to me that since
> we have a very large chunk of memory relative to the database size we
> should be loading the entire database into memory.
You'd think so. But you might be wrong. The OS itself will naturally
cache all of the data in memory anyway. Having PostgreSQL cache it as
well might make things faster or might make them slower, depending on
your usage patterns.
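If you want to see how much of your workload is already being served
from PostgreSQL's own buffers, you can check the stats views --
a rough sketch; this only counts hits in shared_buffers, so reads
satisfied by the OS cache still show up as heap_blks_read, and on
8.1/8.2 you'll need stats_block_level = on for these counters to be
populated:

  SELECT sum(heap_blks_hit)::float
       / nullif(sum(heap_blks_hit + heap_blks_read), 0) AS heap_hit_ratio
    FROM pg_statio_user_tables;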
However, it's far more important that PostgreSQL be able to allocate
memory for individual backends for things like sorts and maintenance
than to use it all to hold mostly static data that may or may not be
accessed all that often.
> How can we be sure
> we're getting the most out of the memory we're allocating to postgres?
I'd suggest not worrying too much about it. Using 100% of your memory
is much more dangerous than not, since when you run out the machine
will start swapping and slow to a crawl.
> What can we do to improve the memory usage, looking for performance
> first and foremost, on both the larger and smaller systems?
>
> Here's the salient config items for the 8GB system:
>
> max_connections = 200 # realistically we expect 50-150 open
> shared_buffers = 38000
That's a good number for the size of database you're currently
running. Having shared_buffers be larger than your data set doesn't
really help. Depending on your workload, having it be smaller can
help (i.e. lots of small transactions).
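For reference, with the default 8 kB block size that setting works
out to roughly:

  38000 buffers * 8 kB = 304000 kB, i.e. about 297 MB

which is already on the order of your ~385 MB data directory, so
there isn't much to gain by pushing it higher.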
> sort_mem = 1048576
This setting doesn't exist in 8.1 and 8.2 anymore; it was replaced
by this one:
> work_mem = 32000
Which, by the way, is a pretty reasonable number, except that if
you're commonly handling 200 actual connections you could be
allocating 32MB * 200 = 6.4GB at peak if each connection is running a
sort at the same time (and a single query can use several work_mem
allocations if it has multiple sort or hash steps). If most
connections won't be using that much, you might be safe.
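If it's really just a few reporting queries that need big sorts, one
option is to keep the global value modest and raise it per session,
since work_mem can be set at run time. The value below is in kB, and
the table and columns are purely illustrative:

  SET work_mem = 131072;            -- 128 MB for this session only
  SELECT account_id, sum(amount)    -- hypothetical big sort/aggregate
    FROM ledger
   GROUP BY account_id
   ORDER BY 2 DESC;
  RESET work_mem;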
> maintenance_work_mem = 32000
> max_fsm_pages = 480001 # probably too large for the max_fsm_*
That's ok. It's better to allocate a few hundred thousand extra fsm
pages than too few, and since you have to restart to change it, it's
better to be prepared.
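If you want to sanity-check those numbers, a database-wide VACUUM
VERBOSE run as superuser prints a free space map summary at the end
(the exact wording varies a bit by version), something like:

  VACUUM VERBOSE;
  -- INFO:  free space map contains ... pages in ... relations
  -- DETAIL: ... page slots are required to track all free space.

If the page slots required exceed max_fsm_pages, it's time to raise it.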
> max_fsm_relations = 20000 # items; one Db with ~400 tables.
> effective_cache_size = 212016 # ~2GB, could probably double this
Since effective_cache_size doesn't allocate anything, but rather acts
as a big round knob telling pgsql roughly how much memory the OS is
using to cache PostgreSQL data, you can just approximate it.
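For example, on the 8 GB box, if you figure the OS plus PostgreSQL
are caching about 3 GB of database data between them, the setting (in
8 kB pages) would be roughly:

  3 GB = 3 * 1024 * 1024 kB / 8 kB per page = 393216

  effective_cache_size = 393216

which is in line with your own note that you could probably double
the current 212016.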
I'd worry more about what kind of drive subsystem you have in this
system. In a database server the I/O subsystem is often the most
important part of planning for good performance.