From: Radu Radutiu <rradutiu(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Postgresql OOM
Date: 2024-06-07 18:42:58
Message-ID: CAG4Txrj1LcSeq7piJ7KigPgLEfWycLFLRJB0-9tc3JN9-go5zw@mail.gmail.com
Lists: pgsql-hackers
On Fri, Jun 7, 2024 at 7:59 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> Hi,
>
> On 2024-06-06 15:25:25 +0300, Radu Radutiu wrote:
> > I have a query that forces an out of memory error, where the OS will kill
> > the postgresql process.
>
> FWIW, it can be useful to configure the OS with strict memory overcommit.
> That
> causes postgres to fail more gracefully, because the OOM killer won't be
> invoked.
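
[For readers following the thread: strict overcommit is an OS-level
setting, not a PostgreSQL one. A sketch of what this looks like on
Linux; the ratio value below is only an example and depends on the
machine's RAM and swap:]

```
# /etc/sysctl.d/99-overcommit.conf -- example only
# Mode 2 = strict accounting: large allocations fail with ENOMEM
# instead of succeeding and later triggering the OOM killer.
vm.overcommit_memory = 2
# CommitLimit = swap + (overcommit_ratio% of RAM); 80 is illustrative.
vm.overcommit_ratio = 80
```

Apply with `sysctl --system` (or `sysctl -w` for a one-off change).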
>
>
> > The query plan (run immediately after a vacuum analyze) is at
> > https://explain.depesz.com/s/ITQI#html .
>
> Can you get EXPLAIN (ANALYZE, BUFFERS) to complete if you reduce the
> number of
> workers? It'd be useful to get some of the information about the actual
> numbers of tuples etc.
>
>
Hi,
First, I tried giving more memory to the OS and mounting a tmpfs
on pgsql_tmp. That didn't work; I got:
ERROR: invalid DSA memory alloc request size 1140850688
CONTEXT: parallel worker
I saw around 2 million temporary files created before the crash.
With work_mem at 100MB I was not able to get it to work with 2
parallel workers.
Next, I increased work_mem to 200MB, and (with the extra memory and
tmpfs) it finished: https://explain.depesz.com/s/NnRC
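
[For anyone reproducing this: the settings discussed in this thread can
be changed per-session. A sketch only; the query itself is not shown in
the thread, and limiting workers via max_parallel_workers_per_gather is
an assumption about how the worker count was reduced:]

```sql
-- Session-level sketch of the settings from this thread.
SET work_mem = '200MB';                   -- raised from 100MB
SET max_parallel_workers_per_gather = 2;  -- assumed way of capping workers
-- Replace the placeholder with the actual query under test:
EXPLAIN (ANALYZE, BUFFERS) SELECT 1;
```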
Radu