Re: LwLocks contention

From: Chris Bisnett <cbisnett(at)gmail(dot)com>
To: Michael Lewis <lewis(dot)michaelr(at)gmail(dot)com>
Cc: PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: LwLocks contention
Date: 2022-04-21 12:17:26
Message-ID: CADCOqPw3D=pXSb5TGi6N_Ta9Y0d5QXcGmEcAA4M36T0ry0-gRg@mail.gmail.com
Lists: pgsql-general

> We are occasionally seeing heavy CPU contention with hundreds of processes active but waiting on a lightweight lock - usually lock manager or buffer mapping, it seems. This is happening on VMs configured with about 64 CPUs and 350GB of RAM, and while we would typically only have 30-100 concurrent processes, there will suddenly be ~300, many showing as active but waiting on an LWLock, and they take much longer than usual. Any suggested options to monitor for such issues, or logging to set up so the next incident can be debugged properly?
>
> It has seemed to me that this occurs when there are more than the usual number of a particular process type and also something that is a bit heavy in usage of memory/disk. It has happened on various tenant instances and different application processes as well.
>
> How might the use of huge pages (or transparent huge pages, or turning them off) play into this scenario?

I've also been dealing with a good bit of lightweight lock contention
that causes performance issues. Most often we see this with the WAL
write lock, but when we get too many parallel queries running we end
up in a "thundering herd" type of issue where contention for the lock
manager lock consumes significant CPU resources, causing the number
of parallel queries to increase as more clients back up behind the
lock contention, leading to even more lock contention. When this
happens we have to pause our background workers long enough to allow
the lock contention to subside, and then we can resume them. When we
hit the lock contention it's not a gradual degradation: it goes
immediately from nothing to more than 100% CPU usage. The same is
true when the contention eases - it goes from 100% back to nothing.
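
For what it's worth, when we suspect this is happening we sample
pg_stat_activity to see which LWLocks backends are actually waiting
on. A minimal sketch (wait_event_type/wait_event exist since 9.6; the
exact event names vary a bit by major version, e.g. lock_manager vs.
LockManager):

    SELECT wait_event, count(*)
      FROM pg_stat_activity
     WHERE wait_event_type = 'LWLock'
     GROUP BY wait_event
     ORDER BY count(*) DESC;

Run repeatedly (e.g. with \watch in psql), a sudden spike in lock
manager waits lines up with the CPU spike for us.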

I've been working under the assumption that this has to do with our
native partitioning scheme and the fact that some queries cannot take
advantage of partition pruning because they don't filter on the
partition column. My understanding is that when this happens ACCESS
SHARE locks have to be taken on every partition as well as all of
their associated resources (indexes, sequences, etc.), and the act of
taking and releasing all of those locks increases the lock contention
significantly. We're working to update our application so that we can
take advantage of the pruning. Are you also using native partitioning?
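
To illustrate the assumption (table and column names here are made
up, not our real schema): with a range-partitioned table the planner
only needs to lock the partitions that survive pruning.

    CREATE TABLE events (tenant_id int, created_at timestamptz, payload text)
        PARTITION BY RANGE (created_at);

    -- Filters on the partition column: only the matching partition(s)
    -- and their indexes need ACCESS SHARE locks.
    SELECT * FROM events
     WHERE created_at >= '2022-04-01' AND created_at < '2022-05-01';

    -- No filter on the partition column: every partition (and every
    -- index on every partition) has to be locked, which is where we
    -- think the extra lock manager traffic comes from.
    SELECT * FROM events WHERE tenant_id = 42;

You can see the difference yourself by running one of the queries in
an open transaction and counting what it grabbed:

    BEGIN;
    SELECT * FROM events WHERE tenant_id = 42;
    SELECT mode, count(*) FROM pg_locks
     WHERE pid = pg_backend_pid() GROUP BY mode;
    ROLLBACK;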

- Chris
