From: Rambabu g <rambabu(dot)g2564(at)gmail(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: HIGH IO and Less CPU utilization
Date: 2022-03-30 04:47:38
Message-ID: CADdgF6evsukuCYogXSBk1K0eJbwqk46+4d_FJ7qXmQ30vrpMnQ@mail.gmail.com
Lists: pgsql-performance
Hi Justin,
Only one query is causing the issue. I'm sharing the index definitions
below; please have a look.
On Wed, 30 Mar 2022 at 01:09, Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
> On Wed, Mar 30, 2022 at 12:52:05AM +0530, Rambabu g wrote:
> > > What indexes are defined on this table ?
> > > How large are they ?
> >
> > There are three indexes defined on the table, each one is around 20 to
> 25GB
> > and the indexes is create on
>
> Did you mean to say something else after "on" ?
>
> Show the definition of the indexes from psql \d
>
Index definitions:

postgres=# \d+ idx_empno
                         Index "l2.pd_activity_empi"
 Column |          Type           | Key? | Definition | Storage  | Stats target
--------+-------------------------+------+------------+----------+--------------
 empno  | character varying(2000) | yes  | empno      | extended |
btree, for table "emp"

postgres=# \d+ id_dt
                                 Index "dt"
 Column |            Type             | Key? | Definition | Storage | Stats target
--------+-----------------------------+------+------------+---------+--------------
 dt     | timestamp without time zone | yes  | dt         | plain   |
btree, for table "emp"

postgres=# \d+ idx_tp
                              Index "idx_tp"
 Column |          Type           | Key? | Definition | Storage  | Stats target
--------+-------------------------+------+------------+----------+--------------
 tp     | character varying(2000) | yes  | tp         | extended |
btree, for table "emp"
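As an aside on the definitions above: the query filters on sname and tp but returns only empno, and none of the three single-column indexes covers that combination. A composite index on all three columns might allow an index-only scan and avoid most of the heap reads. This is only a sketch, assuming the column names from the query (the index name is made up; CONCURRENTLY avoids blocking writes while it builds):

```sql
-- Hypothetical covering index: predicate columns first, empno last so
-- the DISTINCT output can be read straight from the index.
CREATE INDEX CONCURRENTLY idx_emp_sname_tp_empno
    ON emp (sname, tp, empno);

-- Then re-check the plan:
EXPLAIN (ANALYZE, BUFFERS)
SELECT DISTINCT empno FROM emp WHERE sname = 'test' AND tp = 'EMP NAME 1';
```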
The query has been running for 30 minutes.
> postgres=# explain select distinct empno from emp where sname='test'
> and tp='EMP NAME 1'
>
> Is this the only query that's performing poorly ?
> You should send explain (analyze,buffers) for the problematic queries.
>
postgres=# select pid,(now()-query_start) as age,wait_event_type,wait_event,query
           from pg_stat_activity where state!='idle';
  pid  |       age       | wait_event_type |  wait_event   | query
-------+-----------------+-----------------+---------------+-----------------------------------------------------------------------
 32154 | 00:09:56.131136 | IPC             | ExecuteGather | explain analyze select distinct empno from emp where sname='test' and tp='EMP NAME 1'
   847 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct empno from emp where sname='test' and tp='EMP NAME 1'
   848 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct empno from emp where sname='test' and tp='EMP NAME 1'
   849 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct empno from emp where sname='test' and tp='EMP NAME 1'
   850 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct empno from emp where sname='test' and tp='EMP NAME 1'
   851 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct empno from emp where sname='test' and tp='EMP NAME 1'
   852 | 00:09:56.131136 | IO              | DataFileRead  | explain analyze select distinct empno from emp where sname='test' and tp='EMP NAME 1'
   645 | 00:00:00        |                 |               | select pid,(now()-query_start) as age,wait_event_type,wait_event,query from pg_stat_activity where state!='idle'
postgres=# SELECT COUNT(nullif(isdirty,'f')) dirty, COUNT(1) as all,
           COALESCE(c.relname, b.relfilenode::text)
           FROM pg_buffercache b
           LEFT JOIN pg_class c ON b.relfilenode=pg_relation_filenode(c.oid)
           GROUP BY 3 ORDER BY 1 DESC, 2 DESC LIMIT 9;
dirty | all | coalesce
-------+---------+---------------------------------
32 | 136 | fn_deployment
18 | 176 | fn_deployment_key
8 | 12 | event_logs_pkey
6 | 157 | event_logs
1 | 355 | pg_class
0 | 2890261 |
0 | 252734 | utput_status
0 | 378 | emp
0 | 299 | 1249
(9 rows)
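Given the DataFileRead waits above, it may also be worth reporting the cache hit ratio for the table, since emp barely appears in pg_buffercache. A sketch using the standard pg_statio_user_tables view:

```sql
-- Share of emp heap reads served from shared_buffers since stats reset.
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 4) AS hit_ratio
FROM pg_statio_user_tables
WHERE relname = 'emp';
```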
-bash-4.2$ sar
Linux 3.10.0-1160.59.1.el7.x86_64 (ip-10-54-145-108.ec2.internal)  03/30/2022  _x86_64_  (24 CPU)

12:00:01 AM     CPU     %user     %nice   %system   %iowait    %steal     %idle
12:10:01 AM     all      1.19      0.00      0.82     36.17      0.00     61.81
12:20:01 AM     all      0.72      0.00      0.75     35.59      0.00     62.94
12:30:01 AM     all      0.74      0.00      0.77     35.04      0.00     63.46
12:40:02 AM     all      0.74      0.00      0.76     34.65      0.00     63.85
12:50:01 AM     all      0.77      0.00      0.78     33.36      0.00     65.09
01:00:01 AM     all      0.83      0.00      0.78     27.46      0.00     70.93
01:10:01 AM     all      0.85      0.00      0.78     30.11      0.00     68.26
01:20:01 AM     all      0.70      0.00      0.61     20.46      0.00     78.24
01:30:01 AM     all      0.15      0.00      0.06      0.02      0.00     99.77
01:40:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.80
01:50:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.80
02:00:01 AM     all      0.15      0.00      0.06      0.00      0.00     99.78
02:10:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.80
02:20:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.81
02:30:01 AM     all      0.15      0.00      0.06      0.00      0.00     99.80
02:40:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.80
02:50:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.80
03:00:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.80
03:10:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.81
03:20:01 AM     all      0.14      0.00      0.05      0.00      0.00     99.81
03:30:01 AM     all      0.23      0.00      0.15      2.18      0.00     97.44
03:40:01 AM     all      1.16      0.00      0.87     22.76      0.00     75.21
03:50:01 AM     all      0.75      0.00      0.60     13.89      0.00     84.76
04:00:01 AM     all      1.13      0.00      0.87     22.75      0.00     75.26
04:10:01 AM     all      0.87      0.00      0.79     22.91      0.00     75.43
04:20:01 AM     all      0.71      0.00      0.71     22.07      0.00     76.50
Average:        all      0.50      0.00      0.41     13.81      0.00     85.28
-bash-4.2$ iostat
Linux 3.10.0-1160.59.1.el7.x86_64 (ip-.ec2.internal)  03/30/2022  _x86_64_  (24 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           0.44    0.00    0.34   13.35    0.00   85.86

Device:            tps    kB_read/s    kB_wrtn/s     kB_read     kB_wrtn
nvme1n1        1370.20     54514.54      4964.18  7297971937   664565000
nvme2n1           0.92         0.12       223.19       16085    29878260
nvme0n1           0.30         5.12         5.23      685029      699968

-bash-4.2$ iostat -d
Linux 3.10.0-1160.59.1.el7.x86_64 (ip-ec2.internal)  03/30/2022  _x86_64_  (24 CPU)

Device:            tps    kB_read/s    kB_wrtn/s     kB_read     kB_wrtn
nvme1n1        1370.25     54518.06      4963.95  7298793425   664565248
nvme2n1           0.92         0.12       223.17       16085    29878260
nvme0n1           0.30         5.12         5.23      685029      699968
-bash-4.2$ free -g
              total        used        free      shared  buff/cache   available
Mem:             92           1           0           2          90          87
Swap:             0           0           0
>
> > > > Hypervisor vendor: KVM
> > >
> > > Are KSM or THP enabled on the hypervisor ?
>
> > No, the EC2 VM is dedicated to postgres DB instances only.
>
> Oh, so this is an EC2 and you cannot change the hypervisor itself.
>
> > -bash-4.2$ tail /sys/kernel/mm/ksm/run
> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag
> /sys/kernel/mm/transparent_hugepage/enabled
> /sys/kernel/mm/transparent_hugepage/defrag
> ...
> > ==> /sys/kernel/mm/transparent_hugepage/defrag <==
> > [always] madvise never
>
> I doubt it will help, but you could try disabling these.
> It's a quick experiment anyway.
>
I have disabled THP:
-bash-4.2$ tail /sys/kernel/mm/ksm/run
/sys/kernel/mm/transparent_hugepage/khugepaged/defrag
/sys/kernel/mm/transparent_hugepage/enabled
/sys/kernel/mm/transparent_hugepage/defrag
==> /sys/kernel/mm/ksm/run <==
0
==> /sys/kernel/mm/transparent_hugepage/khugepaged/defrag <==
1
==> /sys/kernel/mm/transparent_hugepage/enabled <==
always madvise [never]
==> /sys/kernel/mm/transparent_hugepage/defrag <==
always madvise [never]
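For reference, the runtime toggle looks like the following. This is only a sketch: it needs root, and on EL7 the setting does not survive a reboot unless it is also applied from the kernel command line or a boot-time script.

```shell
# Disable transparent hugepages and their defrag at runtime (root required;
# takes effect immediately, lost on reboot)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```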
Regards,
Rambabu.