From: Michael Lewis <mlewis(at)entrata(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Michael Curry <curry(at)cs(dot)umd(dot)edu>, pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: perf tuning for 28 cores and 252GB RAM
Date: 2019-06-18 15:48:11
Message-ID: CAHOFxGodbsRDU9Z+AveC1vfspzudGtNtU_4JYMWG-H9zAECjGg@mail.gmail.com
Lists: pgsql-general
>
> If your entire database can comfortably fit in RAM, I would make
> shared_buffers large enough to hold the entire database. If not, I would
> set the value small (say, 8GB) and let the OS do the heavy lifting of
> deciding what to keep in cache. If you go with the first option, you
> probably want to use pg_prewarm after each restart to get the data into
> cache as fast as you can, rather than letting it get loaded in naturally as you
> run queries. Also, you would probably want to set random_page_cost and
> seq_page_cost quite low, like maybe 0.1 and 0.05.
>
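(For my own clarity, I take the above to mean settings roughly along these lines; this is only my sketch, and the table name is just a placeholder:)

    -- Cost settings as quoted above, applied cluster-wide.
    ALTER SYSTEM SET random_page_cost = 0.1;
    ALTER SYSTEM SET seq_page_cost = 0.05;
    SELECT pg_reload_conf();

    -- shared_buffers only takes effect after a restart; sized per whichever
    -- option above applies (whole database, or a small value like 8GB).
    ALTER SYSTEM SET shared_buffers = '8GB';

    -- Warm the cache after each restart ('my_big_table' is a placeholder).
    CREATE EXTENSION IF NOT EXISTS pg_prewarm;
    SELECT pg_prewarm('my_big_table');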
In all deference to your status as a contributor, what are these
recommendations based on? Would you share the rationale? I'd just like to
understand better. I have never before heard a recommendation to set random
and seq page cost below 1, for instance.
If the entire database were, say, 1 or 1.5 TB and RAM were on the order of 96
or 128 GB, but some of the data is (almost) never accessed, would the
recommendation still be to rely more on OS caching? Do you target a
particular cache hit rate as reported by Postgres stats?
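For reference, the hit rate I have in mind is the one you can compute from
pg_stat_database, something like this (note it only sees shared_buffers hits,
not the OS page cache):

    -- Rough buffer cache hit ratio for the current database, from Postgres stats.
    SELECT sum(blks_hit) * 100.0 / nullif(sum(blks_hit) + sum(blks_read), 0)
             AS cache_hit_pct
    FROM pg_stat_database
    WHERE datname = current_database();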