From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Greg Stark <stark(at)mit(dot)edu>, Peter Geoghegan <pg(at)heroku(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Clock sweep not caching enough B-Tree leaf pages?
Date: 2014-04-17 14:55:30
Message-ID: CA+TgmoakQ=B+r5Ru=DJ_c10M8yC8OCc9v6ejZaKFyZ=LxmEJMA@mail.gmail.com
Lists: pgsql-hackers
On Thu, Apr 17, 2014 at 10:48 AM, Bruce Momjian <bruce(at)momjian(dot)us> wrote:
>> > I understand now. If there is no memory pressure, every buffer gets the
>> > max usage count, and when a new buffer comes in, it isn't the max so it
>> > is swiftly removed until the clock sweep has time to decrement the old
>> > buffers. Decaying buffers when there is no memory pressure creates
>> > additional overhead and gets into timing issues of when to decay.
>>
>> That can happen, but the real problem I was trying to get at is that
>> when all the buffers get up to max usage count, they all appear
>> equally important. But in reality they're not. So when we do start
>> evicting those long-resident buffers, it's essentially random which
>> one we kick out.
>
> True. Ideally we would have some way to know that _all_ the buffers had
> reached the maximum and kick off a sweep to decrement them all. I am
> unclear how we would do that. One odd idea would be to have a global
> counter that is incremented every time a buffer goes from 4 to 5 (max)
> --- when the counter equals 50% of all buffers, do a clock sweep. Of
> course, then the counter becomes a bottleneck.
Yeah, I think that's the right general line of thinking. But it
doesn't have to be as coarse-grained as "do a whole clock sweep". It
can be, say, for every buffer that gets incremented from 4 to 5, run
the clock sweep just far enough to decrement the usage count of some
other buffer by one. That's similar to your idea, but it lets you do
the work a bit at a time rather than having to make a complete pass
over shared_buffers all at once.
Your other point, that the counter can become the bottleneck, is also
quite right, and a major problem in this area. I don't know how to
solve it right at the moment, but I'm hopeful that there may be a way.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company