From: Jan Wieck <JanWieck(at)Yahoo(dot)com>
To: Micah Yoder <yodermk(at)home(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Practical Cursors
Date: 2001-09-25 19:27:58
Message-ID: 200109251927.f8PJRxX01701@saturn.jw.home
Lists: pgsql-general
Micah Yoder wrote:
> (sorry to reply to a week-old message. need to keep up with this list more!)
>
> On Monday 17 September 2001 17:04, you wrote:
>
> > There is an obvious benefit to the use of cursors within a persistent
> > environment. In other words, if my connection to the database is live, I
> > can increase my query and display efficiency through the use of a cursor.
> >
> > However, this seems to be useless within a web based environment. If we
> > have a live connection through a C++ application, we can perform a
> > transaction and interact within the results.
>
> Yep. That seems to be a disadvantage with ALL database systems & HTTP based
> apps.
>
> I once wrote a MySQL app (I know, but that's what the company used) to do a
> fairly complicated search on a huge database full of domain names. The query
> was time consuming (10-30 seconds) so it obviously could not be performed for
> every prev/next page request.
>
> My first approach was to have the PHP script write the entire data resultset
> to a fixed-length file, which could be easily accessed for each request to
> the point where the user was in the file. Only problem there was when the
> result set was large, initial query time was significantly longer. And that
> happened a lot.
>
> I then wrote a daemon in C to do the work and store the results in RAM. The
> PHP script connected to the daemon via a socket, and passed a request ID and
> the numbers of the records it wanted. Sure, it was convoluted, but I
> actually got the speed up to where I was fairly happy with it.
>
> If there's a better solution than that, I'm not aware of it.
>
> But like someone else mentioned, it's not quite "practical" database usage.
Since search engines and data warehouses tend to have huge
databases with sometimes complicated, long-running queries
that produce anything from empty to huge result sets, this is
quite a common problem. Thus, I would consider any solution
that leads to success "practical" for a start.
PHP together with cursors might be an alternative solution. You
open a cursor for the entire result set. You have a function
that fetches the next n rows from the cursor and writes the
resulting HTML output to a file. It returns true if more
rows were found. You call it once, and if it returns
false you display "No match found" or the like. If it returns
true, you call it again to create a cache file for the second
result page, which also tells you whether there will be one
(i.e. whether to provide a NEXT button). You register a
shutdown function that will call the cache file generator
another m times. Now you display the first cache file, leave
the DB connection with its open transaction and cursor where
they are, and exit.
The user will already see the first result page while your
server is still working. After calling the cache file
generator function m times, the shutdown function closes the
cursor, terminates the transaction and closes the DB
connection.
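A minimal sketch of that flow, with Python standing in for PHP: the PostgreSQL cursor is mocked here with a plain iterator (in a real implementation the fetch would be a `FETCH n FROM cur` inside the open transaction), and the page size, file names, and function names are all illustrative:

```python
import itertools
import os
import tempfile

PAGE_SIZE = 3  # n rows per result page (illustrative)

def fetch_page(cursor, n):
    # Stand-in for "FETCH n FROM cur": pull up to n rows from an iterator.
    return list(itertools.islice(cursor, n))

def generate_cache_file(cursor, cache_dir, page_no):
    # Fetch the next page of rows and write it as an HTML cache file.
    # Returns True if any rows were found, i.e. the page exists.
    rows = fetch_page(cursor, PAGE_SIZE)
    if not rows:
        return False
    path = os.path.join(cache_dir, f"page_{page_no}.html")
    with open(path, "w") as f:
        f.write("<ul>\n")
        for row in rows:
            f.write(f"  <li>{row}</li>\n")
        f.write("</ul>\n")
    return True

# Simulated query result: 8 rows, so pages 1-3 exist and page 4 is empty.
cursor = iter(range(8))
cache_dir = tempfile.mkdtemp()

if not generate_cache_file(cursor, cache_dir, 1):
    print("No match found")
else:
    # Second call decides whether a NEXT button is needed.
    has_next = generate_cache_file(cursor, cache_dir, 2)
    # "Shutdown function": pre-generate up to m more pages, then the real
    # thing would close the cursor, end the transaction, and disconnect.
    m = 5
    for page_no in range(3, 3 + m):
        if not generate_cache_file(cursor, cache_dir, page_no):
            break
    print(sorted(os.listdir(cache_dir)))
```

With 8 rows and a page size of 3, the run above produces three cache files and stops as soon as a fetch comes back empty.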
I think 95% of users will not hit NEXT more than 10 times
before refining their search, so that should be enough. If
one really does, well, then you'd have to run the entire
query again and this time create either more cache files or
all of them.
Now you need some sort of vacuum cleaner for the cache files
and you are done.
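Such a cache cleaner could be as simple as an age-based sweep over the cache directory; this is a sketch under assumed names, with the retention period chosen arbitrarily:

```python
import os
import tempfile
import time

MAX_AGE_SECONDS = 15 * 60  # keep cached result pages for 15 minutes (illustrative)

def vacuum_cache(cache_dir, now=None):
    # Delete cached result pages older than MAX_AGE_SECONDS.
    # Returns the number of files removed.
    now = time.time() if now is None else now
    removed = 0
    for name in os.listdir(cache_dir):
        path = os.path.join(cache_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) > MAX_AGE_SECONDS:
            os.remove(path)
            removed += 1
    return removed

# Demo: one stale cache file, one fresh one.
d = tempfile.mkdtemp()
old = os.path.join(d, "page_1.html"); open(old, "w").close()
new = os.path.join(d, "page_2.html"); open(new, "w").close()
os.utime(old, (time.time() - 3600, time.time() - 3600))  # pretend it is an hour old
print(vacuum_cache(d))  # removes only the stale file
```

In practice this would run from cron or at the start of each search request.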
The drawback of this solution is that you don't know how
many pages there will be in total when you display the first
one. But the benefits are that it fits the
connectionless nature of HTTP, has a small resource footprint,
delivers first results early, and does not hold
transactions open across user interactions.
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #