From: | "Daniel Verite" <daniel(at)manitou-mail(dot)org> |
---|---|
To: | "pgsql-general" <pgsql-general(at)postgresql(dot)org> |
Subject: | Maximum number of exclusive locks |
Date: | 2016-09-13 12:03:28 |
Message-ID: | 07d8bc2e-a501-4f5e-baf8-a7e99e8f4b48@mm |
Lists: pgsql-general
Hi,
When deleting large objects, an exclusive lock is grabbed on each
object individually. As a result, a transaction that does it en
masse can encounter this error:
ERROR: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
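For anyone reproducing this, a batch of throwaway large objects can be
created beforehand with something like this sketch (the count is
arbitrary; passing 0 to lo_create() makes the server assign each OID):

  -- create 40000 empty large objects to delete later
  select count(lo_create(0)) from generate_series(1, 40000);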
I would expect the maximum number of lo_unlink() calls in the same
transaction to be capped at:
max_locks_per_transaction * (max_connections + max_prepared_transactions)
per documentation:
https://www.postgresql.org/docs/current/static/runtime-config-locks.html
"The shared lock table tracks locks on max_locks_per_transaction *
(max_connections + max_prepared_transactions) objects (e.g., tables);
hence, no more than this many distinct objects can be locked at any
one time"
But in practice, on an otherwise unused 9.5 instance, I've noticed
that this query:
select count(lo_unlink(oid))
from (select oid from pg_largeobject_metadata limit :LIMIT) s;
with these settings:
max_locks_per_transaction | 512
max_connections | 30
max_prepared_transactions | 5
starts failing at LIMIT=37133, although I'd expect this to
happen, in the best case, at LIMIT=512*(30+5)=17920.
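One way to watch the lock table fill is to run the unlinks inside an
explicit transaction and, before committing, count from a second
session the exclusive object locks held (a sketch; on an otherwise
idle instance all such entries should come from the deleting backend,
since lo_unlink() takes its locks with locktype 'object'):

  -- count exclusive object locks currently held cluster-wide
  select count(*)
  from pg_locks
  where locktype = 'object'
    and mode = 'AccessExclusiveLock';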
Nothing to complain about, but why would the above formula
underestimate the number of object locks actually available
to a transaction? Isn't it supposed to be a hard cap for such
locks?
Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite