From: Bruce Momjian <bruce(at)momjian(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Peter Geoghegan <pg(at)bowt(dot)ie>, Stephen Frost <sfrost(at)snowman(dot)net>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [HACKERS] Clock with Adaptive Replacement
Date: 2018-05-24 16:13:31
Message-ID: 20180524161331.GH11884@momjian.us
Lists: pgsql-hackers
On Wed, May 2, 2018 at 12:27:19PM -0400, Robert Haas wrote:
> I've seen customers have very good luck going higher if it lets all the
> data fit in shared_buffers, or at least all the data that is accessed
> with any frequency. I think it's useful to imagine a series of
> concentric working sets - maybe you have 1GB of the hottest data, 3GB
> of data that is at least fairly hot, 10GB of data that is at least
> somewhat hot, and another 200GB of basically cold data. Increasing
> shared_buffers in a way that doesn't let the next "ring" fit in
> shared_buffers isn't likely to help very much. If you have 8GB of
> shared_buffers on this workload, going to 12GB is probably going to
> help -- that should be enough for the 10GB of somewhat-hot stuff and a
> little extra so that the somewhat-hot stuff doesn't immediately start
> getting evicted if some of the cold data is accessed. Similarly,
> going from 2GB to 4GB should be a big help, because now the fairly-hot
> stuff should stay in cache. But going from 4GB to 6GB or 12GB to 16GB
> may not do very much. It may even hurt, because the duplication
> between shared_buffers and the OS page cache means an overall
> reduction in available cache space. If for example you've got 16GB of
> memory and shared_buffers=2GB, you *may* be fitting all of the
> somewhat-hot data into cache someplace; bumping shared_buffers=4GB
> almost certainly means that will no longer happen, causing performance
> to tank.
I would love to know how we can help people find out how much data is in
each of these rings so they can tune shared buffers accordingly.
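One rough starting point for the innermost rings might be the pg_buffercache
contrib module. The sketch below (an illustration only, assuming pg_buffercache
is installed and the default 8kB block size) buckets the buffers currently held
in shared_buffers by their clock-sweep usage count, which loosely approximates
how hot the cached portion of the data is:

    -- Sketch: bucket cached buffers by usage count as a rough "temperature"
    -- profile of what shared_buffers already holds.  Assumes the default
    -- 8192-byte block size; cannot see the OS page cache or cold data.
    CREATE EXTENSION IF NOT EXISTS pg_buffercache;

    SELECT usagecount,
           count(*) AS buffers,
           pg_size_pretty(count(*) * 8192) AS cached
    FROM pg_buffercache
    WHERE relfilenode IS NOT NULL
    GROUP BY usagecount
    ORDER BY usagecount DESC;

Grouping additionally by reldatabase and relfilenode gives a per-relation view,
but this still only describes what is already in shared_buffers; measuring the
outer rings that live in the OS page cache or on disk would need OS-level
tooling.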
--
Bruce Momjian <bruce(at)momjian(dot)us> http://momjian.us
EnterpriseDB http://enterprisedb.com
+ As you are, so once was I. As I am, so you will be. +
+ Ancient Roman grave inscription +