From: | "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com> |
---|---|
To: | "Joshua Fielek" <jfielek(at)centriccrm(dot)com> |
Cc: | <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Making the most of memory? |
Date: | 2008-01-24 09:19:58 |
Message-ID: | 4798583E.50007@enterprisedb.com |
Lists: pgsql-performance
Joshua Fielek wrote:
> We have an application that has been having some issues with performance
> within postgres 8.1.9 and later 8.2.5. The upgrade to 8.2.5 gained us a
> nice little performance increase just off the improved query
> optimization, but we are still having other performance issues.
What kind of performance issues are you having? A slow query?
What kind of transactions are you running? Read-only? A lot of updates?
How many transactions per minute?
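If it helps, one rough way to gauge that (assuming the stats collector is
enabled) is to sample pg_stat_database twice, a minute apart, and take the
difference in commit counts:

    SELECT xact_commit, xact_rollback
      FROM pg_stat_database
     WHERE datname = current_database();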
> We have available currently ~4GB (8GB total) for Postgres. We will be
> moving to a server that will have about 24GB (32GB total) available for
> the database, with the current server becoming a hot backup, probably
> with slony or something similar to keep the databases in sync.
>
> I've been monitoring the memory usage of postgres on the current system
> and it seems like none of the threads ever allocate more than about
> 400MB total and about 80-90MB shared memory. It seems to me that since
> we have a very large chunk of memory relative to the database size we
> should be loading the entire database into memory. How can we be sure
> we're getting the most out of the memory we're allocating to postgres?
> What can we do to improve the memory usage, looking for performance
> first and foremost, on both the larger and smaller systems?
How are you measuring the amount of memory used? Which operating system
are you using?
Those numbers don't seem unreasonable to me, though I would've expected
a bit over 300 MB of shared memory to be used, given your shared_buffers
setting.
With a database of only ~400 MB, I doubt you'll ever find use for more
than 1-2 gigs of RAM.
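For reference, you can get the on-disk size of the database directly
(works on 8.1 and later):

    SELECT pg_size_pretty(pg_database_size(current_database()));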
Others have asked about your I/O system, but if the database stays in
memory all the time, that shouldn't matter much. Except for one thing:
fsyncs. Perhaps you're bottlenecked by the fact that each commit needs
to flush the WAL to disk? A RAID array won't help with that, but a RAID
controller with a battery-backed cache will. You could try setting
fsync=off to test that theory, but you don't want to do that in production.
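For a quick test on a scratch instance only (a crash with this setting can
corrupt the database), something like this in postgresql.conf would show
whether WAL flushes are the bottleneck:

    # postgresql.conf -- throwaway test instance only, never production
    fsync = off

If commit latency improves dramatically with that, a battery-backed write
cache on the controller gives you roughly the same benefit without the risk.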
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com