From: David Mullineux <dmullx(at)gmail(dot)com>
To: Koen De Groote <kdg(dot)dev(at)gmail(dot)com>
Cc: PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Memory settings when running postgres in a docker container
Date: 2024-11-20 22:10:40
Message-ID: CAGsyd8Xf=o=0tL7ttqEWg7mPehyL2e=7Nc+OpoNcpSny-JVB9g@mail.gmail.com
Lists: pgsql-general
I don't get why you think all memory will be used.
When you say

shared_buffers = 16GB
effective_cache_size = 48GB

...then this is using only 16GB for shared buffers.
effective_cache_size doesn't cause any memory to be allocated. It's
just a hint to the optimizer.
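
A quick way to see the difference from psql (a minimal sketch; the values shown depend entirely on your own configuration):

    -- shared_buffers is memory PostgreSQL actually reserves at startup;
    -- effective_cache_size is only a planner estimate of available cache,
    -- so nothing is ever allocated for it
    SHOW shared_buffers;
    SHOW effective_cache_size;
    SELECT name, setting, unit, context
      FROM pg_settings
     WHERE name IN ('shared_buffers', 'effective_cache_size');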
On Wed, 20 Nov 2024, 11:16 Koen De Groote, <kdg(dot)dev(at)gmail(dot)com> wrote:
> Assuming a machine with:
>
> * 16 CPU cores
> * 64GB RAM
>
> Set to 500 max connections
>
> A tool like this: https://pgtune.leopard.in.ua/
>
> Will output recommended settings:
>
> max_connections = 500
> shared_buffers = 16GB
> effective_cache_size = 48GB
> maintenance_work_mem = 2GB
> checkpoint_completion_target = 0.9
> wal_buffers = 16MB
> default_statistics_target = 100
> random_page_cost = 1.1
> effective_io_concurrency = 200
> work_mem = 8388kB
> huge_pages = try
> min_wal_size = 1GB
> max_wal_size = 4GB
> max_worker_processes = 16
> max_parallel_workers_per_gather = 4
> max_parallel_workers = 16
> max_parallel_maintenance_workers = 4
>
> And they basically use up all the memory of the machine.
>
> 16GB shared buffers, 48GB effective cache size, 8MB of work_mem for some
> reason...
>
> This seems rather extreme. I feel there should be free memory for
> emergencies and monitoring solutions.
>
> And then there's the fact that postgres on this machine will be run in a
> docker container, which, on Linux, receives only 64MB of /dev/shm shared
> memory by default (though this can be increased).
>
> I feel like I should probably actually lower my upper limit for memory,
> regardless of what the machine actually has, so I can have free memory, and
> also not bring the container process itself into danger.
>
> Is it as straightforward as setting my limit to, say, 20GB, and then giving
> more /dev/shm to the container? Or is there more to consider?
>
> Regards,
> Koen De Groote
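
Regarding the /dev/shm and memory-cap questions in the quoted mail: a minimal
sketch, assuming the plain Docker CLI (the limits below are illustrative, not
recommendations for this workload):

    # --memory puts a hard cap on the container's RAM;
    # --shm-size raises /dev/shm above Docker's 64MB default, which
    # PostgreSQL uses for dynamic shared memory (e.g. parallel query)
    docker run -d --name pg \
      --memory=20g \
      --shm-size=8g \
      -e POSTGRES_PASSWORD=secret \
      postgres:17

With docker compose, shm_size is the corresponding service-level key.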