Re: Request for inputs on #6208 - MaxQueuePool Issue

From: Dave Page <dpage(at)pgadmin(dot)org>
To: Akshay Joshi <akshay(dot)joshi(at)enterprisedb(dot)com>
Cc: Yogesh Mahajan <yogesh(dot)mahajan(at)enterprisedb(dot)com>, pgadmin-hackers <pgadmin-hackers(at)postgresql(dot)org>, Nikhil Mohite <nikhil(dot)mohite(at)enterprisedb(dot)com>
Subject: Re: Request for inputs on #6208 - MaxQueuePool Issue
Date: 2023-08-11 09:10:44
Message-ID: CA+OCxozty7ZTZnmND0_+iHoV7yvmce2Dt7vV+eBrkHAD8L5UDQ@mail.gmail.com
Lists: pgadmin-hackers

On Wed, 9 Aug 2023 at 08:25, Akshay Joshi <akshay(dot)joshi(at)enterprisedb(dot)com>
wrote:

> Hi Yogesh
>
> On Tue, Aug 8, 2023 at 1:48 PM Yogesh Mahajan <
> yogesh(dot)mahajan(at)enterprisedb(dot)com> wrote:
>
>> Hi Hackers,
>>
>> For the #6208 <https://github.com/pgadmin-org/pgadmin4/issues/6208>, below
>> are the observations about this issue -
>>
>> On each API request, an application db connection is created with the
>> state 'idle in transaction'. The connection state changes to idle only
>> after a successful response from the web server. If an unhandled
>> exception occurs while processing a request and no response is sent, the
>> application db connection remains orphaned. Such a connection is only
>> reset on application restart.
>>
>> Issue - In pgAdmin, if the user opens 15 query tool tabs and all of them
>> have long-running transactions such as pg_sleep(), then opening a new
>> query tool, or any other operation in pgAdmin that sends an API request
>> to the backend, throws a 'QueuePool Limit reached' error. (By default,
>> SQLAlchemy allows 15 connections in total: 5 connections in the pool and
>> 10 in overflow, and pgAdmin uses the default setting.) Alternatively, if
>> the user executes a query in the query tool and, while execution is in
>> progress, hits F5 (the keyboard shortcut to execute a query) 16 times in
>> a row, pgAdmin throws the same 'QueuePool Limit reached' error. After
>> that, the query tool gives incorrect responses to queries.
>>
>> Solutions -
>> 1. Provide configurable settings for the 'pool_size' and 'max_overflow'
>> parameters for SQLAlchemy.
>> 2. Disable pooling using NullPool, a pool implementation that does not
>> pool connections at all: it opens and closes the underlying DB-API
>> connection on each connection open/close. Using NullPool may impact
>> performance.
>>
>> What approach should be followed to fix the issue?
>>
>
> If NullPool *may* impact performance, then we should go with
> Solution 1.
>

Yes, with a much larger default value I would suggest.
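
For illustration, here is a minimal sketch of the two SQLAlchemy parameters
in question. The values, the SQLite URL and the explicit poolclass are
placeholders for discussion, not what pgAdmin actually configures:

from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    'sqlite:///pgadmin4.db',   # placeholder URL for the sketch
    poolclass=QueuePool,
    pool_size=50,              # QueuePool default is 5
    max_overflow=100,          # QueuePool default is 10
)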

Plus, we should also stop leaving orphaned connections behind...
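
As a rough sketch of that idea, assuming a plain Flask + scoped_session
setup (pgAdmin's actual session handling may well differ), a teardown
handler could roll back and release the connection even when the request
raised an exception, so nothing is left 'idle in transaction':

from flask import Flask
from sqlalchemy import create_engine
from sqlalchemy.orm import scoped_session, sessionmaker

app = Flask(__name__)
engine = create_engine('sqlite:///pgadmin4.db')   # placeholder URL
db_session = scoped_session(sessionmaker(bind=engine))

@app.teardown_appcontext
def release_db_connection(exc):
    # Runs after every request, including ones that raised an exception.
    if exc is not None:
        db_session.rollback()   # abort the open transaction
    db_session.remove()         # close the session, return the connection to the pool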

--
Dave Page
Blog: https://pgsnake.blogspot.com
Twitter: @pgsnake

EDB: https://www.enterprisedb.com
