From: Fabio Pardi <f(dot)pardi(at)portavita(dot)eu>
To: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: OOM Killer kills PostgreSQL
Date: 2020-05-20 08:40:21
Message-ID: 7580533b-b418-278d-fdc1-6b892e333b5f@portavita.eu
Lists: pgsql-performance
Maybe your memory budget does not fit within the RAM on the machine?
The problem is probably not the query you are looking for, but the settings you are using for Postgres.
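As a quick sanity check, here is a rough sketch (the formula is only an approximation; parallel workers, and the fact that a single query can allocate work_mem more than once, can push the real figure higher):

  -- rough worst-case memory budget:
  --   shared_buffers
  -- + max_connections * work_mem
  -- + autovacuum_max_workers * maintenance_work_mem
  SHOW shared_buffers;
  SHOW max_connections;
  SHOW work_mem;
  SHOW maintenance_work_mem;
  SHOW autovacuum_max_workers;

If that total comes close to (or exceeds) the RAM of the machine minus what the OS itself needs, the OOM killer will sooner or later strike.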
regards,
fabio pardi
On 20/05/2020 09:30, Piotr Włodarczyk wrote:
> Hi folks,
>
> We experienced an unexpected PostgreSQL shutdown. After a little investigation we discovered that the problem is the OOM killer, which kills our PostgreSQL. Unfortunately we can't find the query on the DB causing this problem. The log is as below:
>
> May 05 09:05:33 HOST kernel: postgres invoked oom-killer: gfp_mask=0x201da, order=0, oom_score_adj=-1000
> May 05 09:05:34 HOST kernel: postgres cpuset=/ mems_allowed=0
> May 05 09:05:34 HOST kernel: CPU: 0 PID: 28286 Comm: postgres Not tainted 3.10.0-1127.el7.x86_64 #1
> May 05 09:05:34 HOST kernel: Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
> May 05 09:05:34 HOST kernel: Call Trace:
> May 05 09:05:34 HOST kernel: [<ffffffffa097ff85>] dump_stack+0x19/0x1b
> May 05 09:05:34 HOST kernel: [<ffffffffa097a8a3>] dump_header+0x90/0x229
> May 05 09:05:34 HOST kernel: [<ffffffffa050da5b>] ? cred_has_capability+0x6b/0x120
> May 05 09:05:34 HOST kernel: [<ffffffffa03c246e>] oom_kill_process+0x25e/0x3f0
> May 05 09:05:35 HOST kernel: [<ffffffffa0333a41>] ? cpuset_mems_allowed_intersects+0x21/0x30
> May 05 09:05:40 HOST kernel: [<ffffffffa03c1ecd>] ? oom_unkillable_task+0xcd/0x120
> May 05 09:05:42 HOST kernel: [<ffffffffa03c1f76>] ? find_lock_task_mm+0x56/0xc0
> May 05 09:05:42 HOST kernel: [<ffffffffa03c2cc6>] out_of_memory+0x4b6/0x4f0
> May 05 09:05:42 HOST kernel: [<ffffffffa097b3c0>] __alloc_pages_slowpath+0x5db/0x729
> May 05 09:05:42 HOST kernel: [<ffffffffa03c9146>] __alloc_pages_nodemask+0x436/0x450
> May 05 09:05:42 HOST kernel: [<ffffffffa0418e18>] alloc_pages_current+0x98/0x110
> May 05 09:05:42 HOST kernel: [<ffffffffa03be377>] __page_cache_alloc+0x97/0xb0
> May 05 09:05:42 HOST kernel: [<ffffffffa03c0f30>] filemap_fault+0x270/0x420
> May 05 09:05:42 HOST kernel: [<ffffffffc03c07d6>] ext4_filemap_fault+0x36/0x50 [ext4]
> May 05 09:05:42 HOST kernel: [<ffffffffa03edeea>] __do_fault.isra.61+0x8a/0x100
> May 05 09:05:42 HOST kernel: [<ffffffffa03ee49c>] do_read_fault.isra.63+0x4c/0x1b0
> May 05 09:05:42 HOST kernel: [<ffffffffa03f5d00>] handle_mm_fault+0xa20/0xfb0
> May 05 09:05:42 HOST kernel: [<ffffffffa098d653>] __do_page_fault+0x213/0x500
> May 05 09:05:42 HOST kernel: [<ffffffffa098da26>] trace_do_page_fault+0x56/0x150
> May 05 09:05:42 HOST kernel: [<ffffffffa098cfa2>] do_async_page_fault+0x22/0xf0
> May 05 09:05:42 HOST kernel: [<ffffffffa09897a8>] async_page_fault+0x28/0x30
> May 05 09:05:42 HOST kernel: Mem-Info:
> May 05 09:05:42 HOST kernel: active_anon:5382083 inactive_anon:514069 isolated_anon:0
> active_file:653 inactive_file:412 isolated_file:75
> unevictable:0 dirty:0 writeback:0 unstable:0
> slab_reclaimable:120624 slab_unreclaimable:14538
> mapped:814755 shmem:816586 pagetables:60496 bounce:0
> free:30218 free_pcp:562 free_cma:0
>
> Can you tell me how to find the problematic query? Or how to "pimp" the configuration to keep the DB alive and let us find the problematic query?
>
> --
>
> Regards,
> Piotr Włodarczyk
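
About finding the query: a minimal sketch, assuming superuser access; the 250 ms threshold is only an example to adapt to your workload:

  -- log every statement slower than 250 ms:
  ALTER SYSTEM SET log_min_duration_statement = '250ms';
  SELECT pg_reload_conf();  -- apply without a restart

  -- meanwhile, inspect what is running right now, longest first:
  SELECT pid, now() - query_start AS runtime, state, left(query, 80) AS query
  FROM pg_stat_activity
  WHERE state <> 'idle'
  ORDER BY runtime DESC;

At the OS level, the usual recommendation is to set vm.overcommit_memory=2, so that an over-allocating backend gets a clean "out of memory" error (which Postgres logs together with the offending statement) instead of being shot by the OOM killer.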