Re: Memory settings when running postgres in a docker container

From: Koen De Groote <kdg(dot)dev(at)gmail(dot)com>
To: David Mullineux <dmullx(at)gmail(dot)com>
Cc: PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Memory settings when running postgres in a docker container
Date: 2024-11-22 20:59:46
Message-ID: CAGbX52FendzTdDgTrs_DqF39Oj6pedigmc4Pu7GXy7VUtPxGVw@mail.gmail.com
Lists: pgsql-general

Ah, see, I didn't know that.

On Wed, Nov 20, 2024 at 11:10 PM David Mullineux <dmullx(at)gmail(dot)com> wrote:

> I don't get why you think all memory will be used.
> When you say
> shared_buffers = 16GB
> effective_cache_size = 48GB
>
> ...then this is using only 16GB for shared buffers.
>
> The effective_cache_size setting doesn't cause any memory to be allocated.
> It's just a hint to the optimizer ...
>
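This is easy to see in a live session: effective_cache_size can be changed
per connection, and the only observable effect is on plan choice, never on
memory use. A minimal sketch (the orders table and its indexed column are
made up for illustration):

    SHOW effective_cache_size;
    -- Tell the planner to assume very little data is cached by the OS.
    -- Nothing is allocated or freed by this; it only changes cost estimates.
    SET effective_cache_size = '1GB';
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    -- With a large assumed cache, index scans look cheaper, so the chosen
    -- plan may differ. Again, no memory changes hands.
    SET effective_cache_size = '48GB';
    EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
    RESET effective_cache_size;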
> On Wed, 20 Nov 2024, 11:16 Koen De Groote, <kdg(dot)dev(at)gmail(dot)com> wrote:
>
>> Assuming a machine with:
>>
>> * 16 CPU cores
>> * 64GB RAM
>>
>> Set to 500 max connections
>>
>> A tool like this: https://pgtune.leopard.in.ua/
>>
>> Will output recommended settings:
>>
>> max_connections = 500
>> shared_buffers = 16GB
>> effective_cache_size = 48GB
>> maintenance_work_mem = 2GB
>> checkpoint_completion_target = 0.9
>> wal_buffers = 16MB
>> default_statistics_target = 100
>> random_page_cost = 1.1
>> effective_io_concurrency = 200
>> work_mem = 8388kB
>> huge_pages = try
>> min_wal_size = 1GB
>> max_wal_size = 4GB
>> max_worker_processes = 16
>> max_parallel_workers_per_gather = 4
>> max_parallel_workers = 16
>> max_parallel_maintenance_workers = 4
>>
>> And they basically use up all the memory of the machine.
>>
>> 16GB shared buffers, 48GB effective cache size, 8MB of work_mem for some
>> reason...
>>
>> This seems rather extreme. I feel there should be free memory for
>> emergencies and monitoring solutions.
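For a rough sense of scale: only shared_buffers is reserved up front, while
work_mem is a per-sort/per-hash ceiling that is only consumed while such an
operation runs. A back-of-the-envelope worst case, assuming roughly two
sort/hash nodes per query, looks more like this than like 16GB + 48GB:

    shared_buffers:                16 GB  allocated at startup
    500 conns x 8388 kB x ~2 ops:  ~8 GB  only if every connection is
                                          sorting/hashing at the same time

The oddly specific 8388 kB also appears to be pgtune spreading the
remaining RAM over that worst case: (64GB - 16GB) / (3 x 500 connections)
/ 4 parallel workers per gather works out to roughly 8388 kB.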
>>
>> And then there's the fact that postgres on this machine will be run in a
>> docker container, which, on Linux, receives 64MB of /dev/shm shared memory
>> by default, though that can be increased.
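The default is adjustable when the container is created. A sketch, where
the image tag and size are placeholder values:

    docker run -d --shm-size=2g -e POSTGRES_PASSWORD=... postgres:17

or, in a compose file:

    services:
      db:
        image: postgres:17
        shm_size: 2gb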
>>
>> I feel like I should lower my upper limit for memory, regardless of what
>> the machine actually has, so that some memory stays free and the container
>> process itself isn't put at risk.
>>
>> Is it as straightforward as setting my limit to, say, 20GB, and then
>> giving more /dev/shm to the container? Or is there more to consider?
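Both knobs can be set on the same container. A sketch, where the numbers
are placeholders rather than recommendations:

    docker run -d \
      --memory=24g \
      --shm-size=2g \
      -e POSTGRES_PASSWORD=... \
      postgres:17

One wrinkle worth knowing: the cgroup limit behind --memory also counts the
page cache used by the container's processes, and hitting the limit invokes
the kernel OOM killer on a process inside the container. So shared_buffers
plus worst-case work_mem should sit comfortably below the --memory value,
and effective_cache_size should be sized against the container limit, not
the host's 64GB.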
>>
>> Regards,
>> Koen De Groote
