From: | Chris Hoover <revoohc(at)gmail(dot)com> |
---|---|
To: | "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>, pgsql-performance(at)postgresql(dot)org |
Subject: | Memory Usage Question |
Date: | 2006-01-09 18:54:48 |
Message-ID: | 1d219a6f0601091054q70283291v9deba2cb3c7b8fe@mail.gmail.com |
Lists: | pgsql-admin pgsql-performance |
Question,
How exactly do Postgres and Linux use memory?
I have several databases that have multi-GB indexes on very large tables.
On our current servers, the indexes can fit into memory but not the data
(servers have 8 - 12 GB). However, my boss wants to get new servers
for me but does not want to keep the memory requirements as high as they are
now (this will allow us to get more servers to spread our 200+ databases
over).
Question: if I have a 4GB+ index for a table on a server with 4GB of RAM, and I
submit a query that does an index scan, does Postgres read the entire index,
or just read the index until it finds the matching value? (Our extra-large
indexes are primary keys.)
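My rough mental model (the fanout number here is an illustrative assumption, not a measurement) is that a B-tree lookup reads one index page per tree level, root to leaf, rather than scanning the whole index. A quick back-of-envelope sketch:

```python
import math

def btree_pages_touched(total_rows, keys_per_page=400):
    """Rough estimate of index pages read for a single-key B-tree lookup:
    one page per level of the tree, root to leaf.
    keys_per_page is an assumed fanout, not a measured PostgreSQL value."""
    # Tree height is roughly log base (fanout) of the row count.
    levels = max(1, math.ceil(math.log(total_rows, keys_per_page)))
    # Total pages in the index, for comparison with what one lookup touches.
    total_pages = math.ceil(total_rows / keys_per_page)
    return levels, total_pages

levels, total = btree_pages_touched(500_000_000)
print(levels, total)  # a handful of page reads vs. over a million pages total
```

If that model is right, a single primary-key lookup should only need a few pages of the index in memory, even when the index as a whole is larger than RAM; the open question is what happens under a steady stream of lookups spread across the whole key range.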
I am looking for real numbers to give to my boss that say either that having a
primary key index larger than our memory is bad (and how to clearly justify
that), or that it is OK.
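To gather those numbers, I was planning on something along these lines (the table and index names are placeholders for our actual objects):

```sql
-- Compare the index size on disk to the RAM on the proposed servers.
SELECT pg_relation_size('big_table_pkey');

-- Check whether a primary-key lookup actually uses the index,
-- and how long it takes against a cold vs. warm cache.
EXPLAIN ANALYZE
SELECT * FROM big_table WHERE id = 12345;
```

If there is a better way to measure the penalty of an index that does not fit in memory, I would appreciate pointers.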
If it is OK, what are the trade-offs in performance?
Obviously, I want more memory, but I have to prove the need to my boss since
it raises the cost of the servers a fair amount.
Thanks for any help,
Chris