Re: Heavy LWLockTranche buffer_mapping in Postgres 9.6.10 server

From: Purav Chovatia <puravc(at)gmail(dot)com>
To: Justin Lu <justin(dot)lu5432(at)gmail(dot)com>
Cc: pgsql-admin(at)lists(dot)postgresql(dot)org, pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Heavy LWLockTranche buffer_mapping in Postgres 9.6.10 server
Date: 2020-02-02 09:19:41
Message-ID: CADrzpjH8+8t8hKVG-SbKvR7OxS+cxSb=YjeRjAhkfquN3iObLQ@mail.gmail.com
Lists: pgsql-admin

I believe Transparent Huge Pages should be disabled, and Huge Pages enabled
to the tune of shared_buffers.
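
On Linux both can be checked quickly. A minimal sketch, assuming 2MB huge
pages and the 48GB shared_buffers from this thread; the numbers and paths are
illustrative, adjust for your host:

```shell
# THP should report [never] (or at least [madvise]) on the db host:
cat /sys/kernel/mm/transparent_hugepage/enabled 2>/dev/null || true

# Size vm.nr_hugepages to cover shared_buffers plus a little headroom:
shared_buffers_mb=$((48 * 1024))   # shared_buffers = 48GB, in MB
huge_page_size_mb=2                # Hugepagesize from /proc/meminfo
nr_hugepages=$(( shared_buffers_mb / huge_page_size_mb + 256 ))
echo "vm.nr_hugepages = $nr_hugepages"   # goes into /etc/sysctl.conf
```

With huge_pages = on (or try) in postgresql.conf, Postgres will then map
shared_buffers from the pre-allocated pool.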

HTH

On Sun, 2 Feb 2020 at 04:46, Justin Lu <justin(dot)lu5432(at)gmail(dot)com> wrote:

> Experts,
>
> We are seeing very heavy LWLockTranche buffer_mapping in db recently.
>
> The server has 24 cores, 128GB of RAM, and an SSD data file system, on
> Ubuntu 16.04.6.
> shared_buffers was at 32GB, 1/4 of overall RAM. No issues with
> checkpoints (avg 29 min apart).
>
> After seeing the heavy wait, we added 64GB more RAM and increased
> shared_buffers to 48GB, effective_cache_size to 90GB. But it seems there is
> no impact on the buffer mapping waits at all.
>
> There don't seem to be any big changes in the db load pattern, but since
> the heavy buffer mapping appeared, the CPU load has jumped 3-4x.
>
> File system I/O latency has been increasing gradually as the load on the
> db has grown over the last few months, but it has not jumped.
>
> Temp files generated by user sorting activity have also been increasing,
> and increasing work_mem after adding RAM doesn't seem to help much with
> this.
>
> Any suggestions on what to look for?
>
> thanks,
>
> Justin
>
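
To confirm where the waits are, pg_stat_activity in 9.6 exposes
wait_event_type / wait_event, so sampling it repeatedly gives a rough wait
profile. A sketch; "yourdb" is a placeholder database name:

```shell
# Sample current wait events; run a few times to get a rough profile.
# buffer_mapping shows up as wait_event_type = 'LWLockTranche',
# wait_event = 'buffer_mapping' in 9.6.
SQL="SELECT wait_event_type, wait_event, count(*)
     FROM pg_stat_activity
     WHERE wait_event IS NOT NULL
     GROUP BY 1, 2
     ORDER BY 3 DESC;"

if command -v psql >/dev/null; then
  psql -X -d yourdb -c "$SQL"
fi
```

If buffer_mapping dominates the samples, the contention is on the buffer
lookup partitions rather than on I/O itself.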
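
On the temp-file growth: pg_stat_database keeps cumulative temp_files and
temp_bytes per database, so comparing snapshots over time shows whether the
work_mem increase actually reduced spills. A sketch, again with "yourdb" as a
placeholder:

```shell
# Cumulative temp files/bytes per database since the last stats reset;
# diff two snapshots to see the rate, and consider log_temp_files to log
# the individual queries that spill.
SQL="SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_size
     FROM pg_stat_database
     ORDER BY temp_bytes DESC;"

if command -v psql >/dev/null; then
  psql -X -d yourdb -c "$SQL"
fi
```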
