Re: Connection pooling.

From: Chris Bitmead <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au>
To: Alfred Perlstein <bright(at)wintelcom(dot)net>
Cc: pgsql-hackers(at)hub(dot)org
Subject: Re: Connection pooling.
Date: 2000-07-12 03:48:20
Message-ID: 396BEA84.1A06F51F@nimrod.itg.telecom.com.au
Lists: pgsql-hackers


Seems a lot trickier than you think. A backend can only run one
transaction at a time, so you'd have to keep track of which backends are
in the middle of a transaction, and I can imagine race conditions there.
Backends also carry per-client context set with SET and friends. Then
you'd have to worry about authentication each time a connection is
handed to a different backend, and you'd need some way of cleaning up
old and/or dead processes. It all really sounds a bit hard.
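
Just to make the bookkeeping concrete, here is a minimal sketch of the
per-backend state the broker would have to track before it could safely
multiplex clients (all names are made up; nothing like this exists in
the current postmaster):

#include <stdint.h>
#include <sys/types.h>
#include <time.h>

/* Hypothetical sketch only; no such structures exist today. */
typedef enum
{
    BACKEND_IDLE,            /* between transactions, safe to hand off */
    BACKEND_IN_TRANSACTION,  /* must not be preempted                  */
    BACKEND_DEAD             /* exited or hung, needs reaping          */
} BackendPoolState;

typedef struct
{
    pid_t            pid;          /* child process                    */
    BackendPoolState state;
    int              client_fd;    /* -1 when no client is attached    */
    uint32_t         session_id;   /* key for SET values, temp tables,
                                      authentication info              */
    time_t           last_active;  /* for reaping idle/dead children   */
} PooledBackend;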

Alfred Perlstein wrote:
>
> In an effort to complicate the postmaster beyond recognition I'm
> proposing an idea that I hope can be useful to the developers.
>
> Connection pooling:
>
> The idea is to have the postmaster multiplex database connections and
> hand them off to other postgresql processes when the maximum number
> of connections has been exceeded.
>
> This allows several gains:
>
> 1) Postgresql can support a large number of connections without
> requiring a large number of processes to do so.
>
> 2) Connection startup/teardown will be cheaper because Postgresql
> processes will not exit and then need to re-initialize things such as
> shared memory attachments and open files. This will also reduce the
> load on the host operating system and make postgresql much 'cheaper'
> to run on systems that don't handle the fork() model of execution
> gracefully.
>
> 3) Long-running connections can be preempted at transaction
> boundaries, allowing other connections to get timeslices from
> processes in the connection pool.
>
> The idea is to make the postmaster that accepts connections a broker
> for them. It will dole out client sockets to its children using file
> descriptor passing. If there's demand for connections (all the
> backend children are busy and more connections are pending), the
> postmaster can ask for a yield on one of the connections.
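
The descriptor passing itself would presumably be the usual SCM_RIGHTS
trick over a unix-domain socketpair between postmaster and child; a
rough sketch (error handling omitted, and the matching recvmsg() in the
child is not shown):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/uio.h>

/*
 * Sketch: send the client socket `client_fd' to a child over the
 * unix-domain socket `chan'.  The child pulls the descriptor back out
 * of the ancillary data with recvmsg() + CMSG_DATA().
 */
static ssize_t
send_client_fd(int chan, int client_fd)
{
    struct msghdr   msg;
    struct cmsghdr *cmsg;
    struct iovec    iov;
    char            cbuf[CMSG_SPACE(sizeof(int))];
    char            dummy = 'F';    /* must carry at least one data byte */

    memset(&msg, 0, sizeof(msg));
    memset(cbuf, 0, sizeof(cbuf));

    iov.iov_base = &dummy;
    iov.iov_len = 1;
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &client_fd, sizeof(int));

    return sendmsg(chan, &msg, 0);  /* returns -1 on failure */
}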
>
> A yield involves the child postgresql process passing the client
> connection back at a transaction boundary (between transactions) so
> it can later be given to another (perhaps the same) child process.
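
The natural point for the child to honour such a request would be the
top of its query loop, when no transaction block is open. Roughly as
below, where every function name is invented, not an existing backend
routine:

/* Hypothetical sketch; all of these are made-up names. */
extern void ProcessClientCommand(int client_fd);
extern int  InTransactionBlock(void);
extern int  YieldRequested(void);
extern void PassConnectionBack(int chan, int client_fd,
                               unsigned int session_id);
extern int  WaitForNextConnection(int chan, unsigned int *session_id);

static void
pooled_backend_loop(int chan, int client_fd, unsigned int session_id)
{
    for (;;)
    {
        ProcessClientCommand(client_fd);     /* run one query/command */

        if (!InTransactionBlock() && YieldRequested())
        {
            /*
             * Safe point: no transaction block is open.  Hand the
             * socket back together with the session id so the next
             * child can re-attach SET values, temp tables and auth.
             */
            PassConnectionBack(chan, client_fd, session_id);
            client_fd = WaitForNextConnection(chan, &session_id);
        }
    }
}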
>
> I spoke with Bruce briefly about this and he suggested that system
> tables containing unique IDs could be used to identify passed
> connections to the children and back to the postmaster.
>
> When a handoff occurs, the descriptor could be passed along with an
> ID referencing things like temp tables, environment variables and
> authentication information, allowing the child to resume service to
> the interrupted connection.
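
For concreteness, the context that ID points at might look something
like this (again just a made-up sketch, not anything that exists in the
catalogs today):

#include <stdint.h>

/* Hypothetical per-session context, keyed by the unique id that
 * travels with the descriptor. */
typedef struct
{
    uint32_t session_id;      /* matches the id sent with the socket  */
    char     username[64];    /* authentication already performed     */
    char     database[64];
    char     temp_schema[64]; /* where this client's temp tables live */
    int      nsets;           /* number of SET name/value pairs saved */
    char   **set_names;       /* SET variables to restore on re-attach */
    char   **set_values;
} SessionContext;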
>
> I really don't have the knowledge of Postgresql internals to
> accomplish this, but the concepts are simple and the gains would
> seem to be very high.
>
> Comments?
>
> --
> -Alfred Perlstein - [bright(at)wintelcom(dot)net|alfred(at)freebsd(dot)org]
> "I have the heart of a child; I keep it in a jar on my desk."
