From: Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my>
To: Alex Howansky <alex(at)wankwood(dot)com>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: (not) freeing cursors
Date: 2001-01-04 15:24:33
Message-ID: 3.0.5.32.20010104232433.00884100@192.228.128.13
Lists: pgsql-general
At 02:28 PM 1/3/01 -0600, Alex Howansky wrote:
>
>I've got a table with about 150,000 rows, and I'm typically going to be
>viewing it 4 rows at a time, based on a fairly complex multi-table join.
>The front end application is web (PHP) based, and allows the user to do
>the "View Next 4" and "View Previous 4" thing. Currently, the PHP code
>maintains the offset and issues queries like this:
>
>select * from table where ... offset N limit 4
>
>My server is getting totally flogged though, and I'd like to experiment with
>cursors to try and alleviate some of the load. However, since my front end is
>web based, I have no control over when a user just closes the browser window
So far I'm doing the whole select and throwing away the unneeded results.
If you use offset and limit, the database is the one throwing away the
results. In my case I want to know the number of records in a select, so I
let the application throw away the results.
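Roughly, the difference between the two approaches looks like this (the
table, columns and values here are made up, not your schema):

  -- (orders, customer_id and created are placeholder names)
  -- offset/limit: the backend runs the whole query and discards the rows
  -- outside the window before sending anything to the client
  SELECT * FROM orders WHERE customer_id = 42
      ORDER BY created OFFSET 40 LIMIT 4;

  -- my approach: run the full select, let the app count the rows (so it
  -- knows the total) and throw away whatever it does not display
  SELECT * FROM orders WHERE customer_id = 42 ORDER BY created;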
I decided on this approach, rather than caching the results somewhere or
leaving the transaction open, because of the difficulty of deciding when to
free up resources: I was trying to cater for a scenario where there could
be many users at once.
For example, if the user starts using more than one window and looks at
lots of results, what do you do? Worse, say your resource timeouts are 10
minutes: what happens if you get 1000 users over a 10-minute interval?
If you are certain there will only be a small, controlled number of users
at once, the caching/"reuse the relevant transaction" approach can work.
Unfortunately there is no way to "continue" a transaction on a different
connection (and it is hard to ensure that the user always hits the correct
webapp instance, and thus the same persistent db connection). Even if you
could (e.g. CONTINUE transaction 1452354), things could get messy if the
user opens up two browser windows during a session! It could be feasible in
a select-only environment (a search engine?), but that is quite a
specialised case.
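For what it's worth, the cursor version of your paging would look something
like the sketch below (query and cursor name invented). The catch is that
the DECLARE and every FETCH have to happen inside the same open transaction
on the same connection, which is exactly the resource you cannot easily
free when the user just closes the browser:

  BEGIN;
  -- (placeholder query and cursor name)
  DECLARE page_cur SCROLL CURSOR FOR
      SELECT * FROM orders WHERE customer_id = 42 ORDER BY created;
  FETCH 4 FROM page_cur;        -- first page
  FETCH 4 FROM page_cur;        -- "View Next 4"
  MOVE BACKWARD 8 IN page_cur;  -- step back past both pages...
  FETCH 4 FROM page_cur;        -- ...so this is "View Previous 4"
  CLOSE page_cur;
  COMMIT;                       -- the cursor is gone once the transaction ends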
About your server being flogged:
If memory is an issue, you could try turning off buffering. In my setup the
database buffers the results somewhat, and then the Perl DBI/DBD also
stores the full results, which my webapp then retrieves row by row. It is
possible, however, to configure things so that the web app retrieves rows
more or less straight from the database. The catch is that you will not be
able to do other queries on that db connection until you are finished with
that query (and doing other queries is sometimes necessary if you are doing
recursive stuff).
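One way to get much the same effect at the SQL level is to fetch the result
in chunks through a cursor instead of pulling the whole thing into the
client at once (again just a sketch, with an invented query):

  BEGIN;
  DECLARE big_cur CURSOR FOR                -- placeholder query
      SELECT * FROM orders WHERE amount > 100;
  FETCH 1000 FROM big_cur;                  -- process this batch in the app
  FETCH 1000 FROM big_cur;                  -- repeat until FETCH returns no rows
  CLOSE big_cur;
  COMMIT;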
If not that many rows are being returned by your queries, then there may be
a chance that things could be better optimized - e.g. more indexing, query
rewrites, or table/database reshaping. If something like 50,000 rows are
being returned, then, well, I don't know how to make it faster :(. 150,000
rows isn't that big, unless that's also the size of your result set :). I
can get about 40 to 100+ hits per second with simple selects from a
500,000-row table; throughput is about 1-2MB/sec (slower than disk I/O even
when cached in memory, not sure why :( ).
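On the indexing side, the first thing I would check is that every column
used in the join conditions and the WHERE clause has an index, e.g. (table
and column names invented):

  -- (orders and customer_id are placeholder names)
  CREATE INDEX orders_customer_id_idx ON orders (customer_id);
  VACUUM ANALYZE orders;   -- refresh the planner statistics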
Maybe some experts here can help with your queries and joins. How about
giving an example of your db schema and the complex query? As a start, the
output of an "EXPLAIN" on your query could be helpful.
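That is, just prefix EXPLAIN to the query you are already running,
something like:

  EXPLAIN SELECT * FROM table WHERE ... OFFSET N LIMIT 4;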
Good luck,
Cheerio,
Link.