Re: Persistent Connections

From: "Bee(dot)Lists" <bee(dot)lists(at)gmail(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Persistent Connections
Date: 2020-06-24 17:45:21
Message-ID: EBE1A06C-F4CA-4CA5-BFE1-FD25D828DF3A@gmail.com
Lists: pgsql-general


> On Jun 23, 2020, at 8:09 PM, Tim Cross <theophilusx(at)gmail(dot)com> wrote:
>
> Sounds like your web app may not be closing connections once it has
> finished with them? The fact you're seeing idle connections would seem to
> support this. I would be verifying there isn't a code path in your
> client application which is failing to close a connection correctly. Seeing
> connections go up and down in the gem may not be telling you the full story - could
> be that your client connection objects are being destroyed in your app,
> but are not closing the connection correctly so PG is unaware the client
> has finished.

Hi Tim. I can’t speak for the gem. I’m assuming its garbage collection is working. But yes, it does look that way. I found someone else who was having similar issues to mine:

https://stackoverflow.com/questions/60843123/djangopostgres-fatal-sorry-too-many-clients-already

I’m also seeing the connection count rise overnight from crontabs.

For some clarity, the gem is Sequel, which sits on top of the PG gem (Ruby). I’ve spoken to the Sequel author and he says everything is fine. I have some evidence that the gem is doing its job and this is a connection-handling issue, since the same buildup is happening elsewhere with crontabs and other clients.
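
Roughly what I have in mind for checking where the connections come from is counting them by state and client via pg_stat_activity, run through the same Sequel setup (the connection URL below is a placeholder):

    require "sequel"

    # Placeholder connection URL; swap in real credentials.
    DB = Sequel.connect("postgres://user:password@localhost/mydb")

    # Count connections by state and client application, straight from
    # Postgres's pg_stat_activity view.
    DB.fetch(<<~SQL).each { |row| puts row }
      SELECT state, application_name, client_addr, count(*) AS conns
      FROM pg_stat_activity
      GROUP BY 1, 2, 3
      ORDER BY conns DESC
    SQL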

> Typically, due to the overhead of making a connection, you don't want
> your client app/web app to create a new connection for every query.
> Instead, you would use some type of connection pool. Many development
> languages support some form of pooling as part of their db connection
> library (don't know about Ruby, but JS, Python, PHP, Java, Perl all do)
> and there are external 3rd party solutions like pgbouncer which sit
> between your client and the database and work a bit like a connection
> broker which will manage a pool of connections.

That’s why I’m thinking installing a connection pooler would solve all of this. pgbouncer is what I’m looking at now.
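
As I understand it, the app would just point at pgbouncer’s listen port (6432 by default) instead of 5432, and pgbouncer multiplexes those client connections onto a smaller server-side pool. Roughly this, with host, port and credentials as placeholders:

    require "sequel"

    # Connect to pgbouncer rather than Postgres directly; pgbouncer's
    # default listen_port is 6432. The rest of the app stays the same.
    DB = Sequel.connect("postgres://app_user:secret@127.0.0.1:6432/app_db")

One thing I’ve read to watch for: in pgbouncer’s transaction pooling mode, session-level features (prepared statements, advisory locks, etc.) need care, since the server connection can change between transactions.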

> From the description of what you're doing, I would first look to see what
> level of connection pooling your development language supports. This
> will likely be easier to configure and use than setting up a whole
> additional bit of infrastructure which is probably far more powerful
> than you need.

I will do that. This is all new to me.
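
For my own notes: Sequel already keeps a per-process connection pool, so the language-level option here would be capping it rather than adding anything new. A sketch, with placeholder URL and numbers:

    require "sequel"

    # Sequel's built-in pool: max_connections caps how many PG connections
    # this process will open; pool_timeout is how long a caller waits for a
    # free connection before raising an error.
    DB = Sequel.connect(
      "postgres://user:password@localhost/mydb",
      max_connections: 10,
      pool_timeout:    5
    )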

> I would also go through your code and make sure that every time you
> create a database connection, there is some code which is closing that
> connection once you're finished with it. This is a frequent source of
> problems, especially during development when your code might throw an
> error and the code you wrote to disconnect does not get executed, or you
> simply forget to issue a disconnect when you're finished.

The issue here is that it’s all in the gem. The gem is actually an ORM, built on top of the PG gem. So all the database connection stuff is inside there. It’s so abstracted that I don’t know when the actual calls are made. It’s a DSL so the workings aren’t evident when making queries. It was suggested I install a logger to see what and when queries are actually made.
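
For the logger, Sequel can log every statement it issues, which should show when queries (and therefore connections) are actually used; and for the crontab scripts I can at least disconnect explicitly when they finish. A sketch, with the URL and table name as placeholders:

    require "logger"
    require "sequel"

    DB = Sequel.connect("postgres://user:password@localhost/mydb")

    # Log every SQL statement Sequel issues, with timing.
    DB.loggers << Logger.new($stdout)

    begin
      DB[:users].count   # whatever the script actually does
    ensure
      # Close the pool's connections when a short-lived script finishes,
      # so they don't sit idle in Postgres until the process exits.
      DB.disconnect
    end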

> A connection pool can help in tracking down such issues as well. Most
> pooling solutions will allow you to set a max pool size. In addition to
> enabling you to 'reserve' a set number of connections for a client, you
> will know which client seems to be running out of connections, helping
> to identify the culprit.

That SO link above suggests I’m not alone. So installing a connection pooler seems the right thing to do. Good to know about the reservations. That is better feedback.

Cheers, Bee
