From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: "James Pang (chaolpan)" <chaolpan(at)cisco(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: huge memory of Postgresql backend process
Date: 2022-09-08 21:35:04
Message-ID: CAApHDvqDNVm3iS87FpH6wUa8bOBtH2VVHb2VJvJidna96PCaHw@mail.gmail.com
Lists: pgsql-bugs
On Fri, 9 Sept 2022 at 02:08, James Pang (chaolpan) <chaolpan(at)cisco(dot)com> wrote:
> It looks like the PostgreSQL backend process suddenly allocates a large amount of memory, based on some estimation rule, when running UPDATE/DELETE on partitioned tables, and it doesn't actually use that much memory. We found that it is never freed back to the operating system, even after the backend process has been idle for a long time. In the OS pmap output, almost all of the memory shows as "xxxxx rw--- [ anon ]". Maybe the memory allocated from the OS is overestimated compared to the real usage?
I see you've been asking this question in various places around the
mailing lists.
It seems what you're complaining about is mentioned in our documentation
under [1]. Namely:
"Another reason to be concerned about having a large number of
partitions is that the server's memory consumption may grow
significantly over time, especially if many sessions touch large
numbers of partitions. That's because each partition requires its
metadata to be loaded into the local memory of each session that
touches it."
What that section of the documentation does not clearly mention is that
this "metadata" is kept for the life of the backend process. That is
probably implied by "significantly over time", but maybe we could
have been clearer that we don't free the relcache entries.
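If you want to see the effect for yourself, here is a rough sketch (the
table name and partition count are made up for the example, and the
pg_backend_memory_contexts view used below only exists in PostgreSQL 14
and later, so on 13 you'd have to compare the process's RSS with pmap
instead):

-- hypothetical table with 1000 hash partitions
CREATE TABLE m (id bigint, val numeric) PARTITION BY HASH (id);

DO $$
BEGIN
  FOR i IN 0..999 LOOP
    EXECUTE format(
      'CREATE TABLE m_p%s PARTITION OF m
         FOR VALUES WITH (MODULUS 1000, REMAINDER %s)', i, i);
  END LOOP;
END $$;

-- baseline: total memory tracked by this backend (PG 14+ only)
SELECT sum(total_bytes) FROM pg_backend_memory_contexts;

-- touch every partition so a relcache entry is built for each one
SELECT count(*) FROM m;

-- run the memory query again; the difference is roughly the cached
-- partition metadata, and it stays until the backend exits
SELECT sum(total_bytes) FROM pg_backend_memory_contexts;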
James, if you're having OOM problems due to this, then the solution is
to reduce the number of partitions in the partitioned table, or to
change things so that you don't query as many partitions from as many
backends. The number of relcache entries stored for partitions will be
number_of_partitions_you_query_in_the_backend *
the_number_of_backends. So you'll either want to review the number of
partitions or the number of backends (aka max_connections).
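To put some purely illustrative numbers on that (made-up figures,
including the assumed per-partition metadata size, not measurements
from your system):

-- made-up numbers: 1000 partitions touched per backend, 500 backends,
-- and an assumed ~15 kB of cached metadata per partition per backend
SELECT pg_size_pretty(1000::bigint * 500 * 15 * 1024) AS rough_total;
-- comes out at roughly 7 GB of backend-local cache memory cluster-wide

Halving either the partition count or max_connections halves that
figure, which is why reviewing both is the practical fix.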
David
[1] https://www.postgresql.org/docs/13/ddl-partitioning.html#DDL-PARTITIONING-DECLARATIVE-BEST-PRACTICES