From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>
Cc: "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_stat_lwlock wait time view
Date: 2017-01-10 13:51:49
Message-ID: CA+TgmoYtMAgEzx7jes9psngTc2D3Y5LtWV6aBhLCTp34Lmg9CA@mail.gmail.com
Lists: pgsql-hackers
On Mon, Jan 9, 2017 at 12:13 AM, Haribabu Kommi
<kommi(dot)haribabu(at)gmail(dot)com> wrote:
> Whenever a backend starts waiting for an LWLock, it sends a message to
> the stats collector with its PID and the wait_event_info of the lock.
> When the stats collector receives the message, it records the start time
> and adds an entry for that backend to a hash table. When the backend
> finishes waiting for the lock, it signals the stats collector, which
> looks up the entry in the hash table, computes the wait time, and adds
> that time to the corresponding LWLock entry in a second hash table.
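[Editorial note: for illustration only, here is a standalone toy model of
the flow described above. It is not actual PostgreSQL code, and every name
in it (WaitStartEntry, handle_wait_start, handle_wait_end, and so on) is
invented. Fixed-size arrays stand in for the two hash tables, and per-lock
totals are kept in microseconds.]

/*
 * Standalone sketch of the proposed bookkeeping, NOT PostgreSQL code:
 * all names here are invented for illustration.  On a "wait start"
 * message the collector records (pid, wait_event_info, start time); on
 * the matching "wait end" message it computes the elapsed time and adds
 * it to the per-LWLock total.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define MAX_BACKENDS 64      /* toy capacity, not a real limit */
#define MAX_LWLOCKS  16

typedef struct WaitStartEntry
{
    int             pid;            /* 0 means slot unused */
    uint32_t        wait_event_info;
    struct timespec start_time;
} WaitStartEntry;

static WaitStartEntry pending[MAX_BACKENDS];       /* "first hash table" */
static double         total_wait_us[MAX_LWLOCKS];  /* "second hash table" */

/* Backend reported that it began waiting on an LWLock. */
static void
handle_wait_start(int pid, uint32_t wait_event_info)
{
    for (int i = 0; i < MAX_BACKENDS; i++)
    {
        if (pending[i].pid == 0)
        {
            pending[i].pid = pid;
            pending[i].wait_event_info = wait_event_info;
            clock_gettime(CLOCK_MONOTONIC, &pending[i].start_time);
            return;
        }
    }
}

/* Backend reported that its wait ended: accumulate the elapsed time. */
static void
handle_wait_end(int pid)
{
    struct timespec now;

    clock_gettime(CLOCK_MONOTONIC, &now);
    for (int i = 0; i < MAX_BACKENDS; i++)
    {
        if (pending[i].pid == pid)
        {
            double elapsed_us =
                (now.tv_sec - pending[i].start_time.tv_sec) * 1e6 +
                (now.tv_nsec - pending[i].start_time.tv_nsec) / 1e3;

            total_wait_us[pending[i].wait_event_info % MAX_LWLOCKS] += elapsed_us;
            pending[i].pid = 0;     /* free the slot for reuse */
            return;
        }
    }
}

int
main(void)
{
    handle_wait_start(1234, 7);   /* pretend backend 1234 blocks on lock 7 */
    handle_wait_end(1234);
    printf("lock 7 total wait: %.1f us\n", total_wait_us[7]);
    return 0;
}

[Note that, in this design, every individual wait costs two messages to
the collector (one at wait start, one at wait end), which is the per-wait
overhead the reply below is concerned about.]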
I will be extremely surprised if this doesn't have a severe negative
impact on performance when LWLock contention is high (e.g. a pgbench
read-only test using a scale factor that fits in the OS cache but not
shared_buffers).
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company