From: Don Seiler <don(at)seiler(dot)us>
To: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Estimating HugePages Requirements?
Date: 2021-06-09 16:41:52
Message-ID: CAHJZqBBLHFNs6it-fcJ6LEUXeC5t73soR3h50zUSFpg7894qfQ@mail.gmail.com
Lists: pgsql-admin pgsql-hackers
Good day,
I'm trying to set up a Chef recipe to reserve enough HugePages on a Linux
system for our PG servers. A given VM will only host one PG cluster, and
that will be the only thing on the host that uses HugePages. Blogs that
I've seen suggest it would be as simple as taking the shared_buffers
setting and dividing it by 2MB (the huge page size); however, I found that I
needed more than that.
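For illustration, the naive blog-style calculation looks like this (a sketch using the numbers from this message, assuming a 2MB huge page size and rounding up):

```shell
# Naive huge-page estimate: ceil(shared_buffers / huge page size).
# Values taken from this message; 2MB huge pages assumed.
SHARED_BUFFERS_MB=4003
HUGE_PAGE_MB=2
PAGES=$(( (SHARED_BUFFERS_MB + HUGE_PAGE_MB - 1) / HUGE_PAGE_MB ))
echo "$PAGES"   # 2002 -- not enough in practice, as described below
```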
In my test case, shared_buffers is set to 4003MB (calculated by Chef), but
PG failed to start until I reserved a few hundred more MB. When I checked
VmPeak, it was 4321MB, so I ended up having to reserve 2161 huge
pages, over a hundred more than I had originally estimated.
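Sizing from the observed VmPeak instead gives the figure that actually worked (again a sketch with this message's numbers; on a live system the VmPeak value would come from /proc/<postmaster pid>/status):

```shell
# Huge-page estimate from the postmaster's observed VmPeak:
# ceil(VmPeak / huge page size). Values from this message.
VMPEAK_MB=4321
HUGE_PAGE_MB=2
PAGES=$(( (VMPEAK_MB + HUGE_PAGE_MB - 1) / HUGE_PAGE_MB ))
echo "vm.nr_hugepages = $PAGES"   # vm.nr_hugepages = 2161
```

The catch, of course, is that this requires starting the cluster once to measure it, which is exactly what a deployment-time recipe can't do.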
I'm told other factors contribute to this additional memory requirement,
such as max_connections, wal_buffers, etc. I'm wondering if anyone has been
able to come up with a reliable method for determining the HugePages
requirements for a PG cluster based on the GUC values (that would be known
at deployment time).
Thanks,
Don.
--
Don Seiler
www.seiler.us
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Julien Rouhaud | 2021-06-09 17:23:28 | Re: Estimating HugePages Requirements? |
| Previous Message | Laurenz Albe | 2021-06-08 01:19:36 | Re: patroni recovery_min_apply_delay parameter |