Re: Out of memory in big transactions after upgrade to 12.2

From: Jan Strube <Jan(dot)Strube(at)solvians(dot)com>
To: Michael Lewis <mlewis(at)entrata(dot)com>, David Day <djd2403(at)gmail(dot)com>
Cc: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Out of memory in big transactions after upgrade to 12.2
Date: 2020-04-06 12:12:52
Message-ID: 4a81fed68f92466da1be568e21c30b99@MSEX-04-FRA9.corp.solvians.com
Lists: pgsql-general

Thanks a lot so far for all your answers. work_mem is 4 MB and max_connections is 100, no pooling.

As additional info, maybe I should mention that we do an update on one table which cascades to some other tables, updating tens of thousands of rows there and triggering the Perl functions for every row. I’m not sure this was clear from the stack trace.
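Roughly, the setup has this shape (a minimal sketch only; all table, column and function names below are made up, and the real PL/Perl functions do much more per row):

    -- parent whose key update cascades into the child via ON UPDATE CASCADE
    CREATE TABLE parent (
        id   integer PRIMARY KEY,
        code text
    );

    CREATE TABLE child (
        parent_id integer REFERENCES parent (id) ON UPDATE CASCADE,
        payload   text
    );

    -- per-row PL/Perl trigger on the child; $_TD carries the trigger data
    CREATE FUNCTION child_row_hook() RETURNS trigger AS $$
        elog(NOTICE, 'child row touched: ' . $_TD->{new}{parent_id});
        return;   # AFTER trigger, so the return value is ignored
    $$ LANGUAGE plperl;

    CREATE TRIGGER child_row_hook_trg
        AFTER UPDATE ON child
        FOR EACH ROW EXECUTE FUNCTION child_row_hook();

    -- a single parent-key update then fans out to every matching child row,
    -- firing the PL/Perl function once per cascaded row
    UPDATE parent SET id = id + 100000 WHERE code = 'X';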

From: Michael Lewis [mailto:mlewis(at)entrata(dot)com]
Sent: Friday, April 3, 2020 18:30
To: David Day <djd2403(at)gmail(dot)com>
Cc: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>; Jan Strube <Jan(dot)Strube(at)solvians(dot)com>; pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Out of memory in big transactions after upgrade to 12.2

If you didn't turn it off, you have parallel workers on by default with v12. If work_mem is set high, memory use may be much higher as each node in a complex plan could end up executing in parallel.
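For instance, one way to check the relevant settings and rule parallelism out for a single test session (stock v12 settings assumed, nothing installation-specific):

    SHOW max_parallel_workers_per_gather;     -- 2 by default on v12
    SHOW max_parallel_workers;                -- 8 by default
    SHOW work_mem;
    SET max_parallel_workers_per_gather = 0;  -- disables parallel query for this session only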

Also, do you use a connection pooler such as pgbouncer or pgpool? What is max_connections set to?
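Quick checks along those lines, using only standard views (no extensions assumed):

    SHOW max_connections;
    -- how many client backends are actually connected, vs. background workers
    SELECT backend_type, count(*)
    FROM pg_stat_activity
    GROUP BY backend_type;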
