From: "James Pang (chaolpan)" <chaolpan(at)cisco(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: RE: huge memory of Postgresql backend process
Date: 2022-09-13 08:50:04
Message-ID: CO1PR11MB51857DF935A6F3085E676846D6479@CO1PR11MB5185.namprd11.prod.outlook.com
Lists: pgsql-bugs
Hi,
Based on this test in our environment, we use only one prepared statement:
PREPARE updmetadata(varchar,varchar,varchar,varchar,bigint,varchar) AS UPDATE xxxxx
SET xxxx = $1, xxxxx = $2, KEYURL = $3, xxxx = $4
WHERE xxxx = $5 AND xxxxx = $6
($5 and $6 are the primary key columns), and it runs fast.
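For reference, the full pattern is roughly the following sketch (the table and column names here are made-up placeholders standing in for the anonymized names above, not our real schema):

-- Hypothetical stand-in for the real 256-way HASH-partitioned table.
PREPARE updmetadata(varchar, varchar, varchar, varchar, bigint, varchar) AS
  UPDATE metadata_tab
     SET col_a  = $1,
         col_b  = $2,
         keyurl = $3,
         col_c  = $4
   WHERE obj_id   = $5   -- primary key, part 1
     AND obj_name = $6;  -- primary key, part 2

-- Each call reuses the server-side prepared statement.
EXECUTE updmetadata('a', 'b', 'https://example.com/x', 'c', 12345, 'name1');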
Start a new connection: RES = 12894k
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1837663 postgres 20 0 45.5g 16000 11936 S 0.0 0.0 0:00.02 postgres: xxxxxxxxxxx(50048) idle
After creating a server-side prepared statement for this SQL and running it several times, we start to see the server-side cached statement, and memory quickly increases to 72800k:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1837663 postgres 20 0 45.6g 72800 12000 S 0.0 0.1 0:00.17 postgres: xxxxxxxxx(50048) idle
-- After that, further executions of the prepared statement cause no more changes.
-- Adding a new prepared SQL statement increases memory by several more MB.
Memory allocation for server-side prepared statements depends on the partition count: with 256 partitions, the backend initially asks for about 60MB in one step, then several more MB for each subsequent SQL statement, depending on the statement. Is this kind of memory allocation expected? Or could the allocation be tuned for the "many connections" use case, to avoid out-of-memory errors?
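As a sketch of how we could check where that memory goes (assuming v14 for the pg_backend_memory_contexts view; plan_cache_mode exists since v12):

-- In the session that owns the prepared statement: list the largest memory contexts,
-- e.g. the CachedPlan / CachedPlanSource entries behind the prepared statement.
SELECT name, parent, total_bytes, used_bytes
  FROM pg_backend_memory_contexts
 ORDER BY total_bytes DESC
 LIMIT 20;

-- Possible mitigation we are considering (sketch only): keep using custom plans
-- so that a generic plan covering all 256 partitions is never cached.
SET plan_cache_mode = force_custom_plan;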
Another interesting point: when we use psql .. -h localhost, memory allocation is much lower (only several MB) in the same test. The same SQL against the same table is fine locally, but remote connections need much more memory.
Thanks,
James
-----Original Message-----
From: James Pang (chaolpan)
Sent: Thursday, September 8, 2022 10:08 PM
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: RE: huge memory of Postgresql backend process
After rebuilding from partitioned to non-partitioned tables with the same data size, the same test load uses only 24-25MB per backend. auto_explain shows that both DELETE and UPDATE use partition pruning, so the SQL runs very fast, but process memory still increases quickly. With hundreds of connections doing similar UPDATE/DELETE tests on multiple partitioned tables, the server's physical memory was used up and many "out of memory" errors were dumped to the PG log; however, the dumped memory context stats showed only tens of MB in use, instead of 160MB.
It looks like the PostgreSQL backend allocates one large chunk of memory up front, based on some estimation rule, when updating/deleting from partitioned tables, and does not actually use that much. We also found that the memory is never freed back to the operating system, even after the backend has been idle for a long time. In the OS pmap output, almost all of the memory shows as "xxxxx rw--- [ anon ]". Maybe it is an over-estimated allocation from the OS compared with the real usage?
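To be clear, by "dumped memory context stats" I mean the per-context statistics a backend can write to the server log; on v14 that can be triggered with something like this sketch (superuser required, output goes to the server log, not the client):

-- v14+: ask each idle client backend to dump its memory context statistics
-- into the server log, then compare those totals with the RES value from top/pmap.
SELECT pid, pg_log_backend_memory_contexts(pid)
  FROM pg_stat_activity
 WHERE state = 'idle'
   AND backend_type = 'client backend';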
Thanks,
James
-----Original Message-----
From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Sent: Thursday, September 8, 2022 9:48 PM
To: James Pang (chaolpan) <chaolpan(at)cisco(dot)com>
Cc: PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: huge memory of Postgresql backend process
"James Pang (chaolpan)" <chaolpan(at)cisco(dot)com> writes:
> We just switched from Oracle to PG v13; all of these tables are partitioned (256 HASH partitions). When running UPDATE/DELETE ... WHERE, memory increases very quickly to >130MB, even though there is not much data in these tables. When we switched to PG v14.5 and tested again, we still saw a similar memory consumption issue.
I see no particular reason to think this is a bug. If you have a lot of partitions, there is going to be a lot of metadata for them.
Moreover, HASH partitioning is effectively incapable of being pruned, so that every query is going to touch every partition.
(IMO, hash partitioning is basically never a good idea.)
regards, tom lane