Re: shared_buffers advice

From: Nikolas Everett <nik9000(at)gmail(dot)com>
To: Pierre C <lists(at)peufeu(dot)com>
Cc: Greg Smith <greg(at)2ndquadrant(dot)com>, Dave Crooke <dcrooke(at)gmail(dot)com>, Paul McGarry <paul(at)paulmcgarry(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: shared_buffers advice
Date: 2010-03-16 13:26:15
Message-ID: d4e11e981003160626o6c76d902t16073cabb4dadabc@mail.gmail.com
Lists: pgsql-performance

On Tue, Mar 16, 2010 at 7:24 AM, Pierre C <lists(at)peufeu(dot)com> wrote:
>
> I wonder about something, too : if your DB size is smaller than RAM, you
> could in theory set shared_buffers to a size larger than your DB provided
> you still have enough free RAM left for work_mem and OS writes management.
> How does this interact with the logic which prevents seq-scans hogging
> shared_buffers ?

I think the logic you're referring to is the clock-sweep buffer accounting
scheme. That just makes sure the most popular pages stay in the buffers. If
your entire DB fits in the buffer pool, then it'll all get in there quickly.
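For anyone unfamiliar with it, the clock-sweep idea can be sketched roughly like this. This is a simplified illustration, not the actual server internals; class and variable names are made up, and PostgreSQL's real implementation (in src/backend/storage/buffer) has locking, buffer pins, and a usage-count cap of 5:

```python
class ClockSweepPool:
    """Toy clock-sweep buffer pool: popular pages survive eviction."""

    def __init__(self, nbuffers, max_usage=5):
        self.pages = [None] * nbuffers   # page id held by each buffer slot
        self.usage = [0] * nbuffers      # per-slot usage counters
        self.hand = 0                    # clock hand position
        self.max_usage = max_usage
        self.index = {}                  # page id -> slot

    def access(self, page):
        slot = self.index.get(page)
        if slot is not None:
            # Hit: bump the usage counter (capped), no eviction needed.
            self.usage[slot] = min(self.usage[slot] + 1, self.max_usage)
            return slot
        # Miss: sweep the clock hand, decrementing counters, until we
        # find an empty slot or one whose counter has dropped to zero.
        while True:
            slot = self.hand
            self.hand = (self.hand + 1) % len(self.pages)
            if self.pages[slot] is None or self.usage[slot] == 0:
                break
            self.usage[slot] -= 1
        old = self.pages[slot]
        if old is not None:
            del self.index[old]          # evict the unpopular page
        self.pages[slot] = page
        self.usage[slot] = 1
        self.index[page] = slot
        return slot
```

Because a buffer is only reclaimed once its counter reaches zero, a hot page that keeps getting accessed keeps getting its counter bumped and is never swept out, which is why a fully cached DB stays cached.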

Two things to consider though:
1. The checkpoint issue still stands.
2. You should really adjust your cost estimates if this is the case. If you
make random I/O cost the same as sequential I/O, Postgres will prefer plain
index scans over bitmap index scans and sequential scans, which makes sense
if everything is in memory.
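Concretely, that means lowering random_page_cost toward seq_page_cost. A sketch of how you might experiment with it at the session level (the table name is made up; the right values depend on your hardware, so verify plans with EXPLAIN before persisting anything to postgresql.conf):

```sql
-- Tell the planner random page fetches are as cheap as sequential ones,
-- which is roughly true when the whole database is cached in RAM.
SET random_page_cost = 1.0;   -- default is 4.0
SET seq_page_cost = 1.0;      -- the default, shown for comparison

-- Check whether the plan switches to an index scan (hypothetical table).
EXPLAIN SELECT * FROM some_table WHERE id = 42;
```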
