From: Christopher Masto <chris(at)netmonger(dot)net>
To: Micah Yoder <yodermk(at)home(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Practical Cursors
Date: 2001-10-15 20:33:31
Message-ID: 20011015163326.A8155@netmonger.net
Lists: pgsql-general
On Tue, Sep 25, 2001 at 12:06:48AM -0400, Micah Yoder wrote:
> (sorry to reply to a week-old message. need to keep up with this list more!)
Ditto, but more so.
> I then wrote a daemon in C to do the work and store the results in RAM. The
> PHP script connected to the daemon via a socket, and passed a request ID and
> the numbers of the records it wanted. Sure, it was convoluted, but I
> actually got the speed up to where I was fairly happy with it.
>
> If there's a better solution than that, I'm not aware of it.
A technique I've used with some success is to select just the primary keys
of the rows you're interested in, so you only have to hold on to a list of
integers. Then for each page, select the full rows with "WHERE pkey IN (...)".
It's sort of a middle ground as far as tradeoffs go. You don't have to
store a huge amount of data in RAM or temporary files, but you still
have to do the work up front.
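For anyone who wants to see the shape of it, here's a minimal sketch of that
approach in Python with psycopg2; the table (items), key column (id), the
filter, and the session storage are hypothetical stand-ins, not from any
particular schema:

    # Hypothetical sketch: cache the primary keys up front, then fetch each
    # page of full rows with WHERE ... IN (...).  Names are made up.
    import psycopg2

    conn = psycopg2.connect("dbname=test")
    cur = conn.cursor()

    # Up-front work: run the expensive query once, keep only the keys.
    cur.execute("SELECT id FROM items WHERE some_condition ORDER BY created_at")
    keys = [row[0] for row in cur.fetchall()]   # small enough to stash per session

    PAGE_SIZE = 25

    def fetch_page(page_number):
        """Fetch one page of full rows using the cached key list."""
        page_keys = keys[page_number * PAGE_SIZE:(page_number + 1) * PAGE_SIZE]
        if not page_keys:
            return []
        # psycopg2 adapts a Python tuple to a parenthesized SQL list for IN.
        cur.execute("SELECT * FROM items WHERE id IN %s", (tuple(page_keys),))
        return cur.fetchall()

    print(fetch_page(0))   # first page
    print(fetch_page(3))   # fourth page, without redoing the big query

One caveat: IN doesn't preserve the order of the key list, so re-sort the page
on the client or add an ORDER BY if ordering matters.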
The problem I have with persistent per-session connections is that you
end up having basically the same
(per-transaction-overhead * simultaneous-transactions), and you add
(per-connection-overhead * simultaneous-open-sessions) on top.
There are certainly situations where you can do better one way or the
other... figuring out how best to tune the per-session case scares me.
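To make that comparison concrete, a back-of-the-envelope sketch (every number
below is invented, purely to show the shape of the two cost terms):

    # Hypothetical numbers only; the point is which terms you end up paying.
    per_transaction_overhead = 1.0    # cost of each transaction's setup
    per_connection_overhead = 5.0     # cost of holding a backend open
    simultaneous_transactions = 50
    simultaneous_open_sessions = 500  # open sessions far outnumber active ones

    stateless_cost = per_transaction_overhead * simultaneous_transactions
    persistent_cost = (per_transaction_overhead * simultaneous_transactions
                       + per_connection_overhead * simultaneous_open_sessions)

    print(stateless_cost, persistent_cost)   # 50.0 vs 2550.0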
--
Christopher Masto
CB461C61 8AFC E3A8 7CE5 9023 B35D C26A D849 1F6E CB46 1C61