RE: huge memory of Postgresql backend process

From: "James Pang (chaolpan)" <chaolpan(at)cisco(dot)com>
To: Amit Langote <amitlangote09(at)gmail(dot)com>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: RE: huge memory of Postgresql backend process
Date: 2022-09-10 10:52:58
Message-ID: PH0PR11MB51917B1D39509F6DD29F4B35D6429@PH0PR11MB5191.namprd11.prod.outlook.com
Lists: pgsql-bugs

Hi,
With 256 hash partitions per table, we see a huge increase in backend memory when the JDBC driver uses client-side prepared statements. Even after reducing the partition count from 256 to 64, insert/update/delete statements against 27 partitioned tables still consume about 60 MB per backend. We plan to convert each large partitioned table into multiple non-partitioned tables, for example one 256-way HASH-partitioned table into 256 small standalone tables, and have the application query the small tables directly to keep SQL response times low. Would that help reduce the backend process memory used for "metadata"? I expect the partition-related caches to shrink, but the total table/index count stays the same.
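In case it is useful for testing, here is a rough sketch (the table and partition names are made up, and pg_backend_memory_contexts requires PostgreSQL 14 or later) of detaching one hash partition into a standalone table and then checking, from the application's own session, which memory contexts are largest:

  -- Turn one partition of the (hypothetical) "orders" table into an
  -- ordinary standalone table the application can query directly:
  ALTER TABLE orders DETACH PARTITION orders_p0;

  -- From the same session the application uses, after it has run its
  -- usual prepared statements, list the biggest memory contexts
  -- (relcache, catcache and the cached-plan contexts live under
  -- CacheMemoryContext):
  SELECT name, count(*) AS contexts, sum(total_bytes) AS total_bytes
  FROM pg_backend_memory_contexts
  GROUP BY name
  ORDER BY total_bytes DESC
  LIMIT 10;

Comparing that output before and after the conversion should show whether the partition-related caches really are what takes up the memory.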

Thanks,

James

-----Original Message-----
From: Amit Langote <amitlangote09(at)gmail(dot)com>
Sent: Friday, September 9, 2022 4:42 PM
To: James Pang (chaolpan) <chaolpan(at)cisco(dot)com>
Cc: David Rowley <dgrowleyml(at)gmail(dot)com>; Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>; PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: huge memory of Postgresql backend process

Hi,

On Fri, Sep 9, 2022 at 5:19 PM James Pang (chaolpan) <chaolpan(at)cisco(dot)com> wrote:
> Thanks for your suggestion. Yes, we reduced the partition count from 256 to 128 and then to 64, and the memory went from 170 MB to 91 MB and then to 60 MB.
> It's from the PostgreSQL JDBC client, with preparedStatementCacheQueries at its default of 256. Can reducing that JDBC parameter help shrink the "cached plan context"?

Interesting that you mention "cached plan context".

Cached plans for queries over partitioned tables tend to be big (in terms of memory they take up in the plan cache) because the planner can't use partition pruning. One workaround might be to forcefully prevent plan caching by setting plan_cache_mode to 'force_custom_plan', which makes a backend rebuild the plan on every execution of a prepared statement.
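For example, the setting can be applied per session right after the JDBC connection is opened, or stored for the application role so every new session picks it up automatically (the role name below is only a placeholder):

  -- per session:
  SET plan_cache_mode = force_custom_plan;

  -- or persisted for the application's role:
  ALTER ROLE app_user SET plan_cache_mode = force_custom_plan;

The trade-off is that every execution is replanned, so it is worth checking that the per-statement latency stays acceptable.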

--
Thanks, Amit Langote
EDB: http://www.enterprisedb.com
