From: Radu Radutiu <rradutiu(at)gmail(dot)com>
To: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Postgresql OOM
Date: 2024-06-06 12:25:25
Message-ID: CAG4TxrizOVnkYx1v1a7rv6G3t4fMoZP6vbZn3yPLgjHrg5ETbw@mail.gmail.com
Lists: pgsql-hackers
Hello all,

I have a query that triggers an out-of-memory condition, causing the OS
OOM killer to kill the PostgreSQL process.
The query plan (run immediately after a VACUUM ANALYZE) is at
https://explain.depesz.com/s/ITQI#html.
PostgreSQL version 16.3, running on RHEL 8.9, 16 vCPU, 64 GB RAM, 32 GB swap
shared_buffers=8G
effective_cache_size=24G
maintenance_work_mem=2G
work_mem=104857kB
default_statistics_target = 100
max_worker_processes = 16
max_parallel_workers_per_gather = 4
max_parallel_workers = 16
max_parallel_maintenance_workers = 4
jit=off
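
Given the settings above, a rough upper bound on a parallel hash join's working memory can be estimated (a hedged back-of-the-envelope sketch; hash_mem_multiplier is not listed above and is assumed at its PostgreSQL 15+ default of 2.0):

```sql
-- A hash join is normally bounded by work_mem * hash_mem_multiplier
-- per participating process.
SHOW work_mem;             -- 104857kB, i.e. ~100 MB
SHOW hash_mem_multiplier;  -- assumed default of 2.0 on PostgreSQL 15+
-- Leader + 4 workers: 5 * 100 MB * 2.0 ~= 1 GB expected ceiling.
-- Allocations far beyond that suggest per-batch bookkeeping
-- (e.g. HashSpillContext), not the hash table itself, is growing.
```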
It looks like the excessive memory allocation is reported in
HashSpillContext. I've attached a dump of the memory contexts for the 5
processes (query leader + 4 parallel workers), taken some time after query
start. I also see a huge number of temporary files being created. As a
workaround I've set enable_parallel_hash = 'off' and the problem went away.
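
For reference, the memory-context dumps and temp-file activity mentioned above can be captured from SQL like this (a sketch; the pg_stat_activity filter is illustrative and should be narrowed to the backend running the problem query):

```sql
-- Log the memory contexts of a running backend (PostgreSQL 14+);
-- output goes to the server log.
SELECT pg_log_backend_memory_contexts(pid)
FROM pg_stat_activity
WHERE state = 'active' AND pid <> pg_backend_pid();

-- Cumulative temp-file counters for the current database:
SELECT temp_files, temp_bytes
FROM pg_stat_database
WHERE datname = current_database();

-- The workaround, applied per session rather than globally:
SET enable_parallel_hash = off;
```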
I've seen a potentially similar problem reported in
https://www.postgresql.org/message-id/flat/20230516200052.sbg6z4ghcmsas3wv%40liskov#f6059259c7c9251fb8c17f5793a2d427.
Any ideas on how to identify the problem? I can reproduce it on demand.
Should I report it to pgsql-bugs?
Best regards,
Radu
Attachment: mem.txt (text/plain, 61.4 KB)