From: Mark Lewis <mark(dot)lewis(at)mir3(dot)com>
To: Alessandro Baretta <a(dot)baretta(at)barettadeit(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Suspending SELECTs
Date: 2006-01-16 18:45:09
Message-ID: 1137437109.12082.60.camel@archimedes
Lists: pgsql-performance
On Mon, 2006-01-16 at 11:13 +0100, Alessandro Baretta wrote:
> I am aware that what I am dreaming of is already available through cursors, but
> in a web application, cursors are bad boys, and should be avoided. What I would
> like to be able to do is to plan a query and run the plan to retrieve a limited
> number of rows as well as the executor's state. This way, the burden of
> maintaining the cursor "on hold", between activations of the web resource which
> uses it, is transferred from the DBMS to the web application server, and, most
> importantly, the responsibility for garbage-collecting stale cursors is
> implicitly delegated to the garbage-collector of active user sessions. Without
> this mechanism, we are left with two equally unpleasant solutions: first, any
> time a user instantiates a new session, a new cursor would have to be declared
> for all relevant queries, and an ad-hoc garbage collection daemon would have to
> be written to periodically scan the database for stale cursors to be closed;
> otherwise, instead of using cursors, the web application could resort to
> OFFSET-LIMIT queries--no garbage collection issues but pathetic performance and
> server-load.
>
> Do we have any way out?
>
> Alex
I know that Tom has pretty much ruled out any persistent cursor
implementation in the database, but here's an idea for a workaround in
the app:
Have a pool of connections used for these queries. When a user runs a
query the first time, create a cursor and remember that this user
session is associated with that particular connection. When the user
tries to view the next page of results, request that particular
connection from the pool and continue to use the cursor. Between
requests, this connection could of course be used to service other
users.
This avoids the awfulness of tying up a connection for the entire course
of a user session, but still allows you to use cursors for
performance.
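To make that concrete, here is a rough sketch of what such a pool could look like, assuming Python and psycopg2 (the class and method names, SessionCursorPool, open_cursor and fetch_page, are invented for illustration only):

import itertools
import time

import psycopg2


class SessionCursorPool:
    def __init__(self, dsn, size=10):
        # A small, fixed pool of long-lived connections; each one may hold
        # open cursors (and therefore an open transaction) for several
        # user sessions at once.
        self.connections = [psycopg2.connect(dsn) for _ in range(size)]
        self.session_map = {}   # session_id -> (connection, cursor name)
        # Per-connection timestamp of when its current transaction began
        # (None while the connection has no open cursors).
        self.txn_started = {c: None for c in self.connections}
        self._names = itertools.count()

    def open_cursor(self, session_id, query):
        # First page: pick a pooled connection, DECLARE a cursor for the
        # query, and remember which connection this session is bound to.
        # (query is assumed to be built safely elsewhere.)
        conn = self._pick_connection()
        name = "sess_cur_%d" % next(self._names)
        cur = conn.cursor()
        cur.execute("DECLARE %s CURSOR FOR %s" % (name, query))
        cur.close()
        if self.txn_started[conn] is None:
            self.txn_started[conn] = time.time()
        self.session_map[session_id] = (conn, name)

    def fetch_page(self, session_id, page_size=20):
        # Later pages: go back to the same connection and FETCH the next
        # rows from the cursor declared earlier.
        conn, name = self.session_map[session_id]
        cur = conn.cursor()
        cur.execute("FETCH %d FROM %s" % (page_size, name))
        rows = cur.fetchall()
        cur.close()
        return rows

    def _pick_connection(self):
        # Simplest possible policy: least-loaded connection.  A policy
        # that also avoids the oldest transaction is sketched further down.
        return min(self.connections, key=self._cursor_count)

    def _cursor_count(self, conn):
        return sum(1 for c, _ in self.session_map.values() if c is conn)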
When a user session is invalidated or times out, you remove that
session's mapping and close its cursor. Whenever there are no more
mappings for a particular connection, you can use the opportunity to
close the current transaction (to prevent eternal transactions).
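Continuing the same sketch, the cleanup might look roughly like this (close_session is again an invented name):

def close_session(pool, session_id):
    # Drop the session's mapping and close its cursor.
    entry = pool.session_map.pop(session_id, None)
    if entry is None:
        return
    conn, name = entry
    cur = conn.cursor()
    cur.execute("CLOSE %s" % name)
    cur.close()
    # If no remaining session holds a cursor on this connection, end its
    # (read-only) transaction so that no transaction stays open forever.
    if not any(c is conn for c, _ in pool.session_map.values()):
        conn.rollback()
        pool.txn_started[conn] = None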
If the site is at all busy, you will need to implement a pooling policy
such as 'do not open new cursors on the connection with the oldest
transaction'. That ensures every transaction can be closed in a finite
amount of time; the upper bound on the duration of a transaction is
(longest_session_duration * connections in pool).
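Expressed against the sketch above, such a policy could look something like this (pick_connection is a made-up helper; it relies on the txn_started bookkeeping shown earlier):

def pick_connection(pool):
    # Prefer a connection with no open transaction at all.
    idle = [c for c in pool.connections if pool.txn_started[c] is None]
    if idle:
        return idle[0]
    # Otherwise, avoid the connection with the oldest open transaction
    # whenever there is an alternative, then take the least loaded one.
    by_age = sorted(pool.connections, key=lambda c: pool.txn_started[c])
    candidates = by_age[1:] if len(by_age) > 1 else by_age
    return min(candidates, key=pool._cursor_count)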
Limitations:
1. You shouldn't do anything that acquires write locks on the database
using these connections, because the transactions will be long-running.
To mitigate this, use a separate connection pool.
2. Doesn't work well if some queries take a long time to run, because
other users may end up waiting for that particular connection, and no
other connection will do.
3. If this is a busy web site, you might end up with many thousands of
open cursors. I don't know whether this introduces an unacceptable
performance penalty or some other bottleneck in the server.
-- Mark Lewis