From: | "James Pang (chaolpan)" <chaolpan(at)cisco(dot)com> |
---|---|
To: | David Rowley <dgrowleyml(at)gmail(dot)com> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org> |
Subject: | RE: huge memory of Postgresql backend process |
Date: | 2022-09-09 08:19:33 |
Message-ID: | PH0PR11MB5191C4B6046CD057D0B15159D6439@PH0PR11MB5191.namprd11.prod.outlook.com |
Lists: pgsql-bugs
Thanks for your suggestion. Yes, we reduced the partition count from 256 to 128 and then to 64, and the backend memory dropped from about 170MB to 91MB and then to 60MB.
The connections come from the PostgreSQL JDBC client, with preparedStatementCacheQueries left at its default of 256. Would reducing that JDBC parameter help shrink the "cached plan" contexts?
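For reference, here is a minimal sketch of how those pgJDBC cache settings can be configured on a connection (it needs the pgJDBC driver on the classpath; the URL, credentials, and the value 64 below are placeholders, not our actual settings):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PgJdbcCacheSettings {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app_user");       // placeholder credentials
        props.setProperty("password", "secret");
        // Server-side prepared statements are used after this many executions (pgJDBC default: 5).
        props.setProperty("prepareThreshold", "5");
        // Maximum number of distinct queries cached per connection (pgJDBC default: 256).
        props.setProperty("preparedStatementCacheQueries", "64");
        // Upper bound on the client-side statement cache size, in MiB (pgJDBC default: 5).
        props.setProperty("preparedStatementCacheSizeMiB", "5");

        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/appdb", props)) {
            System.out.println("connected: " + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}

As I understand it, this only bounds the per-connection statement cache on the client side; each cached query still corresponds to a server-side prepared statement whose plan is kept in the backend, so fewer cached queries should mean fewer cached plans per backend.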
Thanks,
James
-----Original Message-----
From: David Rowley <dgrowleyml(at)gmail(dot)com>
Sent: Friday, September 9, 2022 5:35 AM
To: James Pang (chaolpan) <chaolpan(at)cisco(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>; PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: huge memory of Postgresql backend process
On Fri, 9 Sept 2022 at 02:08, James Pang (chaolpan) <chaolpan(at)cisco(dot)com> wrote:
> It looks like the PostgreSQL backend suddenly allocates a large amount of memory, based on some estimation rule, when updating/deleting from partitioned tables, even though not that much memory is actually used. We also found it is never freed back to the operating system, even after the backend process has been idle for a long time. In the OS pmap output, almost all of the memory shows up as "xxxxx rw--- [ anon ]". Could it be that the allocation from the OS is overestimated relative to the real usage?
I see you've been asking this question in various places around the mailing lists.
It seems what you're complaining about is mentioned in our documents under [1]. Namely:
"Another reason to be concerned about having a large number of partitions is that the server's memory consumption may grow significantly over time, especially if many sessions touch large numbers of partitions. That's because each partition requires its metadata to be loaded into the local memory of each session that touches it."
What that section of the documents does not clearly mention is that this "metadata" is kept for the life of the backend process. That is probably implied by "significantly over time", but maybe we could have been clearer that we don't free the relcache entries.
James, if you're having OOM problems due to this, then the solution is to reduce the number of partitions in the partitioned table, or to change things so that you don't query as many partitions from as many backends. The number of relcache entries stored for partitions will be number_of_partitions_you_query_in_the_backend * the_number_of_backends, so you'll want to review either the number of partitions or the number of backends (i.e. max_connections).
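As a rough back-of-the-envelope illustration of that multiplication (the numbers below are made up, not measured from your system):

public class RelcacheEstimate {
    public static void main(String[] args) {
        // Made-up figures purely for illustration; measure on your own system.
        int partitionsQueriedPerBackend = 64;  // partitions each session ends up touching
        int backends = 500;                    // roughly the number of active connections
        long entries = (long) partitionsQueriedPerBackend * backends;
        System.out.println("relcache entries kept across all backends: " + entries);
    }
}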
David
[1] https://www.postgresql.org/docs/13/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-BEST-PRACTICES