From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: John Moran <johnfrederickmoran(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Larger volumes of chronologically ordered data and the planner
Date: 2010-03-03 21:24:25
Message-ID: 4B8ED389.7040705@2ndquadrant.com
Lists: pgsql-general
John Moran wrote:
> Is postgreSQL intelligent enough to discern that
> since the most frequently accessed data is invariably recent data,
> that it should store only that in memory, and efficiently store less
> relevant, older data on disk
When you ask for a database block, PostgreSQL increments a usage count
for that block: once when it's first read from disk into memory, and
again each time it turns out to already be cached. Requests to allocate
new buffers constantly decrease those usage counts as they "clock sweep"
over the cache looking for space that hasn't been used recently. This
automatically keeps blocks you've used recently in RAM, while evicting
ones you haven't.
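To make the mechanism concrete, here's a minimal sketch of that clock-sweep idea in Python. This is an illustrative toy, not PostgreSQL's actual implementation; the class name and the usage-count cap of 5 are my own choices (PostgreSQL does cap usage counts at a small constant, but everything else here is simplified):

```python
# Toy clock-sweep buffer cache: hits bump a block's usage count,
# misses sweep a "hand" around the slots, decrementing counts until
# a free or zero-usage slot is found to reuse.

class ClockSweepCache:
    MAX_USAGE = 5  # usage counts are capped at a small value

    def __init__(self, size):
        self.slots = [None] * size  # each slot: (block_id, usage_count) or None
        self.index = {}             # block_id -> slot position
        self.hand = 0               # current position of the sweep hand

    def access(self, block_id):
        """Return True on a cache hit; on a miss, evict and load, return False."""
        if block_id in self.index:
            pos = self.index[block_id]
            bid, usage = self.slots[pos]
            # Hit: the block was already in memory, bump its usage count.
            self.slots[pos] = (bid, min(usage + 1, self.MAX_USAGE))
            return True
        # Miss: sweep forward, decrementing usage counts, until we find
        # a slot that is empty or whose count has dropped to zero.
        while True:
            slot = self.slots[self.hand]
            if slot is None or slot[1] == 0:
                if slot is not None:
                    del self.index[slot[0]]          # evict the cold block
                self.slots[self.hand] = (block_id, 1)
                self.index[block_id] = self.hand
                self.hand = (self.hand + 1) % len(self.slots)
                return False
            # Still-warm block: decrement its count and move on.
            self.slots[self.hand] = (slot[0], slot[1] - 1)
            self.hand = (self.hand + 1) % len(self.slots)
```

Frequently accessed blocks keep their counts high, so the sweep passes over them several times before they become eviction candidates, while blocks touched once decay to zero quickly.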
The database doesn't have any intelligence about what data to keep in
memory beyond that. Its sole notion of "relevant" is whether someone has
accessed a block recently. The operating system cache sits as a second
layer on top of this, typically with its own LRU-style scheme for
determining what gets cached.
I've written a long paper covering the internals, "Inside the
PostgreSQL Buffer Cache", available at
http://www.westnet.com/~gsmith/content/postgresql/ if you want to know
exactly how this is all implemented.
--
Greg Smith 2ndQuadrant US Baltimore, MD
PostgreSQL Training, Services and Support
greg(at)2ndQuadrant(dot)com www.2ndQuadrant.us