Assistance with an out of shared memory error

From: Gautam Bellary <gautam(at)pulsasensors(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Assistance with an out of shared memory error
Date: 2022-01-13 21:30:59
Message-ID: CA+MVzH2N65_JzJyC4p8C4EC9k29C=VAi9eAnZdWWFV-co4c0+w@mail.gmail.com

Hi Postgres group,

I'm reaching out for some help with an "ERROR: out of shared memory. HINT:
You might need to increase max_locks_per_transaction" issue:

I've got a PL/pgSQL function (regenerate_gateway_lastseen, attached) that
loops through all partitions of two tables ("Measure" and
"ValuelessMeasure", schema attached), selecting data from each into
another table ("GatewayLastSeenNew"). Occasionally the function runs to
completion, but usually it fails with the error copied below. I'd
appreciate any advice to help me understand why this happens, and whether
increasing max_locks_per_transaction, changing some other configuration
value, or restructuring the function would improve reliability.
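
For context, the core of the function is a loop like the following (a
simplified sketch, not the exact attached code; the INSERT matches the
statement in the error context below):

CREATE OR REPLACE FUNCTION regenerate_gateway_lastseen() RETURNS void AS $$
DECLARE
    part RECORD;
BEGIN
    -- Iterate over every partition of both parent tables.
    FOR part IN
        SELECT inhrelid::regclass AS relname
        FROM pg_catalog.pg_inherits
        WHERE inhparent IN ('measure'::regclass,
                            'valuelessmeasure'::regclass)
    LOOP
        -- Upsert the latest Time per GatewayId from this partition.
        EXECUTE format(
            'INSERT INTO GatewayLastSeenNew(GatewayId, Time)
             SELECT GatewayId, MAX(Time) AS max_time
             FROM %s
             GROUP BY GatewayId
             ON CONFLICT (GatewayId) DO UPDATE SET Time = EXCLUDED.Time
             WHERE GatewayLastSeenNew.GatewayId = EXCLUDED.GatewayId
               AND GatewayLastSeenNew.Time < EXCLUDED.Time',
            part.relname);
    END LOOP;
END;
$$ LANGUAGE plpgsql;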

At the moment I have three theories as to why this might be happening:
1. The machine is out of memory - this seems unlikely, because
shared_buffers is set to '5400MB' and I don't see the machine use any more
of its available swap while the function runs.

2. max_locks_per_transaction is being exceeded within the transaction -
this also seems unlikely. max_locks_per_transaction is at its default of
64, while there are ~8000 partitions to visit and I expect the resulting
GatewayLastSeenNew table to contain thousands of rows, so if the locks I
take counted against a hard ceiling of 64 I'd expect the function to fail
every time, not often-but-not-always as I observe.

3. The maximum size of the shared lock table is being exceeded - my
understanding is that the lock table has room for
max_locks_per_transaction * (max_connections + max_prepared_transactions)
locks, which would be 64 * (200 + 0) = 12800 for my current configuration.
Watching 'SELECT COUNT(*) FROM PG_LOCKS' while the function runs (see the
queries below), I've observed values as high as 21758, so if this is the
issue then either I'm not estimating the size of the lock table correctly
or only certain kinds of locks count against it.
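
In case it's useful, here are a couple of pg_locks queries that might
narrow this down. My assumption is that fast-path entries (fastpath =
true) don't consume slots in the shared lock table, so the raw COUNT(*)
may overstate the pressure on it:

-- Lock counts per backend, to see whether one session accumulates them:
SELECT pid, COUNT(*) AS lock_count
FROM pg_locks
GROUP BY pid
ORDER BY lock_count DESC;

-- Breakdown by lock type and fast-path flag; assuming only
-- fastpath = false rows occupy shared lock table entries:
SELECT locktype, fastpath, COUNT(*)
FROM pg_locks
GROUP BY locktype, fastpath
ORDER BY 3 DESC;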

Thanks,
Gautam

Error:
SELECT regenerate_gateway_lastseen();
ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
CONTEXT: SQL statement "INSERT INTO GatewayLastSeenNew(GatewayId, Time)
SELECT GatewayId, MAX(Time) AS max_time
FROM valuelessmeasure_39137
GROUP BY GatewayId
ON CONFLICT (GatewayId) DO UPDATE SET Time = EXCLUDED.Time WHERE
GatewayLastSeenNew.GatewayId = EXCLUDED.GatewayId AND
GatewayLastSeenNew.Time < EXCLUDED.Time;"
PL/pgSQL function regenerate_gateway_lastseen() line 37 at EXECUTE

SHOW max_locks_per_transaction;
max_locks_per_transaction
---------------------------
64
(1 row)

SHOW max_connections;
max_connections
-----------------
200
(1 row)

SHOW max_prepared_transactions;
max_prepared_transactions
---------------------------
0
(1 row)

SELECT COUNT(*) FROM PG_LOCKS;
count
-------
21758
(1 row)

SELECT COUNT(INHRELID::REGCLASS) FROM PG_CATALOG.PG_INHERITS WHERE
INHPARENT = 'measure'::REGCLASS;
count
-------
3672
(1 row)

SELECT COUNT(INHRELID::REGCLASS) FROM PG_CATALOG.PG_INHERITS WHERE
INHPARENT = 'valuelessmeasure'::REGCLASS;
count
-------
3672
(1 row)

Attachment Content-Type Size
regenerate_gateway_lastseen.sql application/octet-stream 3.0 KB
table_schema.sql application/octet-stream 934 bytes
