From: Scottix <scottix(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Optimizing Database High CPU
Date: 2019-02-27 18:08:43
Message-ID: CANKFHZ9oXEvjdvd7AE8Uhc--et0Oo2muj5jLDv1NOxgqjssuXQ@mail.gmail.com
Lists: pgsql-general
Hi, we are running a PostgreSQL 9.4.18 database and we are noticing
high CPU usage. Nothing is critical at the moment, but if we were to
scale up what we are doing much further, I feel we are going to run
into issues.
It is a 2 x 6 core machine with 128 GB RAM and a RAID 10 HDD array.
The iostat metrics for the disks look minimal, < 10% utilization.
Available memory seems to be fine.
The CPU utilization is what is bothering me:
user 5-7%
sys 50-70% - seems high
wa <0.5%
So, trying to troubleshoot the possible causes of the high CPU:
The number of concurrent connections averages 50 to 100, which seems
high, although we max out at 200. (A sketch of how those numbers could
be checked is below this list.)
There are no long-running queries.
We do streaming replication to a backup server.
High update tables - we have about 4 tables that receive a high volume of updates.
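For context, this is roughly the kind of check I mean on 9.4; the
five-minute cutoff is just an arbitrary example, not something we
alert on:

-- Connections grouped by state (active, idle, idle in transaction, ...)
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state
ORDER BY count(*) DESC;

-- Anything that has been running longer than five minutes
SELECT pid, now() - query_start AS runtime, state, query
FROM pg_stat_activity
WHERE state <> 'idle'
  AND now() - query_start > interval '5 minutes'
ORDER BY runtime DESC;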
The high update rate is what I suspect is causing the issue. One
option I found is lowering fillfactor on those tables, but from what I
read online that requires a VACUUM FULL to take effect on existing
data, which I am trying to avoid; if it really needs to be done we can
schedule it. I just want to make sure I am chasing the correct rabbit hole.
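If it helps to make that concrete, the change I have in mind would
look roughly like this ("hot_table" is just a placeholder name; my
understanding is the new fillfactor only applies to pages written from
now on, hence the rewrite):

-- Leave ~30% free space in each heap page for same-page (HOT) updates
ALTER TABLE hot_table SET (fillfactor = 70);

-- The ALTER alone only affects newly written pages; a table rewrite
-- applies it to existing data, but VACUUM FULL takes an exclusive lock
VACUUM FULL hot_table;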
Are there any statistics I could look at to see if a setting change would help?
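For example, I am wondering whether the HOT update ratio in
pg_stat_user_tables is the right thing to look at, something along
these lines:

-- Share of updates that were HOT (same-page); a low percentage on the
-- high-update tables would suggest a lower fillfactor might help
SELECT relname,
       n_tup_upd,
       n_tup_hot_upd,
       round(100.0 * n_tup_hot_upd / nullif(n_tup_upd, 0), 1) AS hot_pct
FROM pg_stat_user_tables
ORDER BY n_tup_upd DESC
LIMIT 10;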
Best,
Scott
--
T: @Thaumion
IG: Thaumion
Scottix(at)Gmail(dot)com