From: Joe Conway <mail(at)joeconway(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jason McLaurin <jason(at)jcore(dot)io>
Cc: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Very slow queries followed by checkpointer process killed with signal 9
Date: 2023-04-03 12:33:19
Message-ID: c17da59d-0845-cac7-d1d9-486b46cf0427@joeconway.com
Lists: pgsql-general
On 4/2/23 21:40, Tom Lane wrote:
> Jason McLaurin <jason(at)jcore(dot)io> writes:
>> Is there anywhere you'd suggest we start looking for hints? I'd be
>> interested in increasing relevant logging verbosity so that we can see when
>> key background processes are running, both in Postgres core and Timescale.
>
> It might be helpful to try to identify which wait events the slow
> queries are blocking on (pg_stat_activity.wait_event_type and
> .wait_event). I'm not sure if you're going to be able to extract
> useful data, because your query on pg_stat_activity is likely to
> be slow too. But it's a place to start.
>
> Also, given that you're evidently incurring the wrath of the OOM
> killer, you should try to understand why the kernel thinks it's
> under memory pressure. Do you have too many processes, or perhaps
> you've configured too much shared memory?
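As a starting point on the wait-event suggestion, a snapshot along these
lines (untested here; wait_event_type and wait_event have been columns of
pg_stat_activity since 9.6) should show what the active backends are
waiting on:

    psql -X -c "SELECT pid, state, wait_event_type, wait_event,
                       left(query, 60) AS query
                FROM pg_stat_activity
                WHERE state <> 'idle';"

Re-running it every few seconds (e.g. under watch) makes the dominant
wait event easier to spot.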
Given this:
> This is Postgres 14.5 running in the TimescaleDB Docker image.
Possibly the docker image is running with a cgroup memory.limit set?
The OOM killer will trigger on any cgroup limit even if the host has
plenty of free memory.
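If so, checking from inside the container should confirm it. Something
like the following (paths differ between cgroup v1 and v2, and dmesg may
need to be run on the host rather than in the container) shows whether a
limit is set and whether the kernel has been OOM-killing processes:

    # cgroup v2: "max" means no limit
    cat /sys/fs/cgroup/memory.max
    # cgroup v1 equivalent
    cat /sys/fs/cgroup/memory/memory.limit_in_bytes
    # kernel log entries from the OOM killer
    dmesg | grep -iE 'killed process|oom'

If the cgroup limit is well below host memory, shared_buffers plus
per-backend memory can hit it long before the host itself is under any
real pressure.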
--
Joe Conway
PostgreSQL Contributors Team
RDS Open Source Databases
Amazon Web Services: https://aws.amazon.com