Re: Enough RAM for entire Database.. cost aside, is this

From: Shridhar Daithankar <shridhar(at)frodo(dot)hserus(dot)net>
To: Andy B <abhousehuntRE-M--O--V-E(at)blueyonder(dot)co(dot)uk>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Enough RAM for entire Database.. cost aside, is this
Date: 2004-07-08 08:57:29
Message-ID: 40ED0C79.9060709@frodo.hserus.net
Lists: pgsql-general

Hi,

Andy B wrote:
> 1. Postgresql is a two tiered cache mechanism. The first tier - the
> postgresql shared buffer cache sits on the second, larger tier, the linux
> buffer cache. So bits of the same data end up being in memory...twice, and
> two cache mechanisms operate at the same time. (That's how I understand it).

That is correct. But I would advise you to see shared buffers as a
workspace rather than a cache.
>
> 2. Even if the linux buffer cache contains all the data required for an
> execution of a plan, there is still a load of memory copying to do between
> these two tiers. Though memory copying is faster than disk access, it is
> still an overhead, and isn't there the real problem of thrashing between
> these two tiers if the plan can't fit all the data into the top tier, even
> if the thrashing is restricted to the memory system?

That is certainly not correct. I don't have hard sources to back it up, but
if you open a file that you just closed, linux does not go copying it from
its cache to the process address space. It would rather juggle the page
table to make memory pages available to your process.

By that argument, there would be three caches: postgresql shared buffers,
files mapped into the process address space, and linux buffers. I think
that defeats the purpose of caching.
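
If you want to see that mechanism in miniature, here is a hedged Python
sketch (the file path is made up; mmap hands the kernel's cached pages to
the process instead of copying them through read()):

    import mmap, os

    # Map a data file read-only; the kernel wires its cached pages into
    # our address space rather than copying them. The path is hypothetical.
    fd = os.open("/var/lib/pgsql/data/base/1/1249", os.O_RDONLY)
    m = mmap.mmap(fd, 0, prot=mmap.PROT_READ)
    first_page = m[:8192]   # touching the mapping faults pages in from cache
    m.close()
    os.close(fd)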

> 3. The OS will implement a 'Least recently Used' cache replacement strategy
> on data in its cache. This isn't the optimal cache replacement strategy for
> the database. If postgresql is in charge of all its RAM resident data, it
> can make more intelligent decisions about which stuff isn't needed once
> used.

There are many things to note about the approach postgresql adopts:

- If the machine is shared between multiple services such as a web server,
database, and mail server, this approach makes postgresql a good citizen by
letting the OS decide which of the hard-hit applications needs memory most.
- On a dedicated machine, this approach yields almost the entire buffer
cache to postgresql anyway.
- Getting caching right w.r.t. execution is a hard thing to do. Postgresql
still has some room to go before it can claim to make the best use of all
the information available in a query plan. Until then, taking over file
system caching is not such a good idea. Hackers are already working on
making the postgresql planner smarter than it already is.
- Postgresql or no postgresql, the OS has to manage its buffer cache. Why
duplicate an effort that somebody else is already better at? If the OS
improves, everybody benefits. I can recall quite a few posts reporting
gains just from moving from a 2.4.x kernel to a 2.6.x one.

Again, shared buffers is not a cache. It is a workspace. Leave it around
64MB, no matter how big your data set is.
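
For the record, a minimal sketch of the relevant postgresql.conf line,
assuming the default 8KB block size (in 7.x the value is a number of
buffers, not bytes):

    shared_buffers = 8192    # 8192 buffers x 8KB = 64MB of workspace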

> 1. If the planner *knew* (rather than guessed) that all the data was
> effectively 'there' in RAM, in the right place (i.e. its workspace),
> wouldn't it make choosing the optimal plan easier? (I see that
> effective_buffer_cache is a stab in that direction, but just because
> postgresql guesses the data will have been cached by linux, it doesn't mean
> it actually is - surely an important distinction.)

Not with the current planner. Please correct me if I am wrong. And given
the assumption that OS buffer cache -> postgresql shared buffers is a
low/no-cost operation, this is not an optimization that would win high
priority on the TODO list.
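
For what it is worth, the parameter is actually called
effective_cache_size, not effective_buffer_cache, and it is exactly that
stab. A hedged example, again assuming 8KB pages and roughly 1GB of RAM
left over for the kernel:

    effective_cache_size = 131072    # 131072 x 8KB = ~1GB of expected OS cache

The planner only uses this number when costing index scans against
sequential scans; nothing verifies that those pages are actually resident.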

> 2. You'd avoid a whole layer of caching, along with the not necessarily
> aligned memory copies and other overheads that this introduces. Even in the
> optimal case where all the data needed does reside in RAM, it's not in the
> right bit of RAM. (I may have misunderstood this relationship between the
> shared buffer cache and the linux buffer cache - if no memory copying
> actually occurs - then I'll go away for a bit!)

I believe there is no memory copying. At most there is page table jugglery
at the kernel level and mmaps at the libc level (the mmap sketch earlier
illustrates the idea). But again, this is a guess.
>
> Two interesting things I dug up today:
>
> www.linuxjournal.com/article.php?sid=5482 (to do with a high performance DB
> living in an OS controlled environment)
>
> and
>
> http://www.ixora.com.au/tips/avoid_buffered_io.htm (to do with Oracle
> tuning)
>
> The way things are, I see that postgresql will 'gracefully degrade' in the
> very common scenario where the database shares a machine with limited
> resources and lots of processes, all of them hammering the machine, but in
> my 'optimal RAM optimised db server ' scenario, the scheme seems actually to
> reduce the potential for a blistering system.

I would say it asymptotically approaches the best performance.

> So given all of that, I can't help swinging back to my original question
> about whether it's a bad thing to make the shared buffer cache huge, so long
> as you have much more RAM than the size you choose.

Well, the best thing I can tell you is to test it yourself and find the
sweet spot. By general wisdom it should be between 64MB and 128MB. If you
find otherwise, we would be more than glad to hear about it.
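
A rough sketch of such a test in Python, assuming contrib/pgbench is
installed, a 'bench' database was initialised with pgbench -i, and you can
restart the server (the data directory path is made up):

    import os

    DATADIR = "/var/lib/pgsql/data"           # adjust to your installation

    # Try a few shared_buffers sizes (counted in 8KB buffers) and run the
    # same pgbench load against each. The last setting in postgresql.conf
    # wins, so appending is enough.
    for pages in (4096, 8192, 16384, 32768):  # 32MB, 64MB, 128MB, 256MB
        os.system("pg_ctl -D %s stop -m fast" % DATADIR)
        with open(DATADIR + "/postgresql.conf", "a") as conf:
            conf.write("\nshared_buffers = %d\n" % pages)
        os.system("pg_ctl -D %s start -w" % DATADIR)
        print("shared_buffers = %d" % pages)
        os.system("pgbench -c 10 -t 1000 bench")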

I read another thread where you posted your actual requirements. I would
be inclined to split the data across multiple database servers, with the
application deciding how to consolidate data from the various servers. It
might take some development, but it could save you a lot on hardware.
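
A hedged sketch of that consolidation idea, using psycopg2 (host names,
table, and query are all hypothetical):

    import psycopg2
    from datetime import date

    HOSTS = ["db1.example.com", "db2.example.com"]  # one shard per machine
    cutoff = date(2004, 1, 1)

    rows = []
    for host in HOSTS:                              # fan the query out
        conn = psycopg2.connect(host=host, dbname="app")
        cur = conn.cursor()
        cur.execute("SELECT id, total FROM orders WHERE placed > %s",
                    (cutoff,))
        rows.extend(cur.fetchall())
        conn.close()

    rows.sort(key=lambda r: r[1], reverse=True)     # the merge lives in the app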

Just a thought..

HTH

Shridhar
