From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
Cc: "James Pang (chaolpan)" <chaolpan(at)cisco(dot)com>, "pgsql-performance(at)lists(dot)postgresql(dot)org" <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: thousands of CachedPlan entry per backend
Date: 2023-06-01 07:18:31
Message-ID: CAFj8pRDsJrJGL+nnKLb+1rM5P8gh_XT_bVmDK71k9VgTv8xfzA@mail.gmail.com
Lists: pgsql-performance
Hi
On Thu, Jun 1, 2023 at 8:53 AM Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
wrote:
> On Thu, 2023-06-01 at 03:36 +0000, James Pang (chaolpan) wrote:
> > PG v14.8-1, with clients using the PostgreSQL JDBC driver. We found 40MB
> > of process memory per backend; the figure from the operating system and
> > the "Grand total:" of the memory context dump both matched.
> > But in the details, almost all entries belong to "CacheMemoryContext".
> > That context itself reports: CacheMemoryContext: 8737352 total in 42
> > blocks; 1021944 free (215 chunks); 7715408 used. However, there are
> > thousands of lines for its children, and the sum of their blocks is much
> > more than the 8737352 total in 42 blocks.
> >
> > Our application uses the PostgreSQL JDBC driver with default parameters
> > (max prepared statements 256). There are many triggers and functions in
> > this database, and a few functions run SQL through the pg_background
> > extension. We have thousands of connections, and our big concern is why
> > there are thousands of cached SQL entries: they consume a lot of memory.
> > Is there any way to limit the number of cached plan entries to reduce
> > memory consumption? Or is it abnormal behavior, or a bug, to see so many
> > cached plan lines?
>
> If you have thousands of connections, that's your problem. You need
> effective connection pooling. Then 40MB per backend won't be a problem
> at all. Having thousands of connections will cause other, worse,
> problems for you.
>
> See for example
>
> https://www.cybertec-postgresql.com/en/tuning-max_connections-in-postgresql/
>
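As a concrete illustration of the pooling suggestion above, here is a minimal PgBouncer configuration sketch; the host, database name, credentials path, and pool sizes are placeholders, not values taken from this thread:

```ini
; Minimal pgbouncer.ini sketch: thousands of client connections share a
; small number of server backends, which caps total backend memory.
[databases]
; placeholder connection target
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
; transaction pooling gives the best reuse, but session state such as
; server-side prepared statements is not preserved across transactions
; in this mode, which matters for JDBC server-prepared statements
pool_mode = transaction
max_client_conn = 2000
default_pool_size = 20
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

With roughly 20 server connections instead of thousands, the ~40MB per backend observed above becomes negligible in total.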
> If you want to use functions, but don't want to benefit from plan
> caching, you can set the configuration parameter "plan_cache_mode" to
> "force_custom_plan".
>
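The plan_cache_mode suggestion can be applied per session or persisted per role; a short sketch (the role name "app_user" is a placeholder):

```sql
-- Force custom plans so prepared statements and PL/pgSQL statements do
-- not keep generic plans cached (trades planning time for memory).
SET plan_cache_mode = force_custom_plan;

-- Or persist the setting for one application role:
ALTER ROLE app_user SET plan_cache_mode = force_custom_plan;
```

The valid values are auto (the default), force_generic_plan, and force_custom_plan.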
The problem of overly large cached metadata can also be caused by sessions
that stay open too long. In that case it is good to discard the session
(connection) after one hour, or maybe less.
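Since the application uses the PostgreSQL JDBC driver, one way to recycle connections after a fixed lifetime is a client-side pool such as HikariCP; this is a sketch under that assumption (the JDBC URL, credentials, and pool size are placeholders), with the one-hour value mirroring the suggestion above:

```java
// Sketch: HikariCP pool that discards each connection after at most one
// hour, bounding how much cached metadata any one backend accumulates.
HikariConfig config = new HikariConfig();
config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder
config.setUsername("app_user");                             // placeholder
config.setPassword("secret");                               // placeholder
config.setMaximumPoolSize(20);
config.setMaxLifetime(3_600_000L); // 1 hour, in milliseconds
HikariDataSource ds = new HikariDataSource(config);
```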
Regards
Pavel
>
> Yours,
> Laurenz Albe
>