From: "Campbell, Lance" <lance(at)illinois(dot)edu>
To: pgsql-performance(at)postgresql(dot)org
Subject: memory question
Date: 2010-03-25 00:49:10
Message-ID: B10E6810AC2A2F4EA7550D072CDE876004B2E200@SAB-FENWICK.sab.uiuc.edu
Lists: pgsql-performance
PostgreSQL 8.4.3
Red Hat Linux 5.0
Question: How much memory do I really need?
From my understanding there are two primary strategies for configuring
PostgreSQL's use of memory:
1) Rely on the Linux page cache. In this approach you set
shared_buffers to a relatively low value.
2) Set shared_buffers to a large percentage of your memory so that
PostgreSQL itself reserves the memory for the database.
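To illustrate, the two strategies differ only in a couple of
postgresql.conf settings; the numbers below are hypothetical, not
recommendations for your 24 Gig box:

```
# Strategy 1: small shared_buffers, rely on the Linux page cache
shared_buffers = 512MB
effective_cache_size = 20GB   # planner hint about total cache (OS + PG)

# Strategy 2: large shared_buffers, PostgreSQL caches for itself
#shared_buffers = 8GB
#effective_cache_size = 12GB
```

effective_cache_size does not allocate anything; it only tells the
planner how much caching to expect, so it should be sized to match
whichever strategy you pick.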
I am currently using option #1. I have 24 Gig of memory on my server
and the database takes up 17 Gig of disk space. When I run the Linux
command "top" I notice that 19 Gig is allocated for cache. Is there a
way for me to tell how much of that cache is actually holding database
files?
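top cannot break the cache down per file, but on Linux the mincore(2)
system call reports how much of a given file is resident in the page
cache (the fincore utility from util-linux and the pgfincore extension
wrap the same call). A minimal Python sketch of that idea, assuming a
Linux system:

```python
import ctypes
import ctypes.util
import mmap
import os

libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
PAGE_SIZE = os.sysconf("SC_PAGE_SIZE")


def resident_bytes(path):
    """Bytes of `path` currently resident in the Linux page cache."""
    size = os.path.getsize(path)
    if size == 0:
        return 0
    with open(path, "rb") as f:
        # MAP_PRIVATE lets us get a writable view of a read-only fd,
        # which ctypes needs in order to take the buffer's address.
        m = mmap.mmap(f.fileno(), size, flags=mmap.MAP_PRIVATE,
                      prot=mmap.PROT_READ | mmap.PROT_WRITE)
        npages = (size + PAGE_SIZE - 1) // PAGE_SIZE
        vec = (ctypes.c_ubyte * npages)()  # one status byte per page
        buf = (ctypes.c_char * size).from_buffer(m)
        try:
            rc = libc.mincore(ctypes.c_void_p(ctypes.addressof(buf)),
                              ctypes.c_size_t(size), vec)
            if rc != 0:
                err = ctypes.get_errno()
                raise OSError(err, os.strerror(err))
        finally:
            del buf   # release the exported buffer before closing the map
            m.close()
        # Low bit of each vector entry means "page is resident".
        return sum(v & 1 for v in vec) * PAGE_SIZE
```

Walking the data directory (for example /var/lib/pgsql/data; the
location varies by install) with os.walk and summing resident_bytes
over every file approximates how much of that 19 Gig cache is
database data.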
I am basically asking how much memory do I really need? Maybe I have
complete overkill. Maybe I am getting to the point where I might need
more memory.
My thought was to use option #2 but start with a lower setting; if
performance is bad, slowly work the number up.
Our server manager seems to think that I have way too much memory. He
thinks that we only need 5 Gig. I don't really believe that, but I
want to cover myself. With money tight I don't want to be the person
who is wasting resources. We need to replace our database servers, so I
want to do the right thing.
Thanks,
Lance Campbell
Software Architect/DBA/Project Manager
Web Services at Public Affairs
217-333-0382