From: | "Ed L(dot)" <pgsql(at)bluepolka(dot)net> |
---|---|
To: | Martijn van Oosterhout <kleptog(at)svana(dot)org> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: DB cache size strategies |
Date: | 2004-02-10 22:20:09 |
Message-ID: | 200402101520.09687.pgsql@bluepolka.net |
Lists: pgsql-general
On Tuesday February 10 2004 1:42, Martijn van Oosterhout wrote:
> I generally give Postgresql about 64-128MB of shared memory, which covers
> all of the system tables and the most commonly used small tables. The
> rest of the memory (this is a 1GB machine) I leave for the kernel to
> manage for the very large tables.
Interesting. Why leave very large tables to the kernel instead of the DB
cache? Assuming a dedicated DB server and a DB smaller than available RAM,
why not give the DB enough RAM to get the entire DB into the DB cache?
(Assuming you have the RAM).
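For what it's worth, a rough way to sanity-check whether the whole DB could
even fit is to sum relpages from the catalogs. Just a sketch: relpages is only
as fresh as the last VACUUM/ANALYZE, assumes 8kB pages, and ignores
per-connection and kernel overhead:

  -- Approximate on-disk size of everything in the current database, in MB
  -- (relpages lags behind reality until VACUUM/ANALYZE runs):
  SELECT sum(relpages) * 8 / 1024 AS approx_size_mb
  FROM pg_class;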
Is there a point of diminishing returns for DB cache size *prior* to running
out of data to load into, and access from, that cache? Anyone know how to
gauge that point? Maybe shared memory management overhead versus the
performance difference between DB cache reads and OS cache reads? We have
some pretty intense DB loads, but that feels like counting angels on the
head of a pin...
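One crude gauge I can think of is the block-level hit counters, assuming the
stats collector is actually gathering them (stats_block_level in
postgresql.conf). Note this only counts hits against the DB cache; reads that
miss here may still be served from the kernel cache, so it's a hint rather
than an answer:

  -- Share of block requests satisfied from shared buffers, per database:
  SELECT datname,
         blks_hit,
         blks_read,
         round(blks_hit::numeric / nullif(blks_hit + blks_read, 0), 3) AS hit_ratio
  FROM pg_stat_database;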
> I think giving postgresql half your memory is counterproductive.
I get the sense it really depends on the nature of the apps running on the
box. If it's a dedicated server (not sharing with an app, a web server,
etc.), then giving it enough to get all the data in memory without swapping
is ideal, no matter whether that's 10MB or 10GB of DB cache. If it's a
cluster server running alongside lots of other apps that all generally have
the same priority, and assuming disk cache reads are nearly as fast as DB
cache reads, it seems the less DB cache you have, the better; let all the
apps leverage the kernel cache equally. Would you agree?
One troublesome case for which I wonder if there is an easy tweak
unbeknownst to me: a huge table that gets lots of random reads and writes.
It's too big for the cache, and the volume of random queries is such that
it's a disk I/O drain. Of course, more memory or better data modeling could
help. Maybe that is the worst case scenario.
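If it helps any, the per-table I/O counters can at least point at which
relations are generating the misses (again assuming block-level stats are
enabled; these are heap counters only, index blocks have their own columns):

  -- Tables ordered by heap blocks actually read from below the DB cache;
  -- a big table near the top with far more reads than hits is the kind of
  -- random-I/O drain described above.
  SELECT relname, heap_blks_read, heap_blks_hit
  FROM pg_statio_user_tables
  ORDER BY heap_blks_read DESC
  LIMIT 10;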
Thx.