From: rminnett(at)rsmas(dot)miami(dot)edu (Rupert)
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Cursors with Large, Ordered Result Sets
Date: 2003-03-28 00:32:45
Message-ID: 57a87d50.0303271632.74b6d281@posting.google.com
Lists: pgsql-general
Thanks for the quick reply and sorry for the slow response.
Yes, this is very similar to what we are currently doing and it seems
to be working rather well - much to my surprise. However, I still have
the same questions regarding the actual steps being taken by the DBMS
to order a massive result set. Doesn't it need to materialize the
entire sorted result before it can return the first rows? And if the
result is larger than available RAM, does it spill to disk and do the
sort there?
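For what it's worth, the general technique a DBMS uses when a sort
exceeds its in-memory budget is an external merge sort: write sorted
runs to temporary files, then stream-merge them, so memory stays
bounded while disk temporarily holds roughly the full result. This is
only a minimal sketch of that idea (PostgreSQL's actual tuplesort is
far more sophisticated, and the memory budget there is the sort_mem /
work_mem setting); the function names here are my own, not anything
from the server:

```python
import heapq
import os
import tempfile

def _spill(sorted_chunk):
    # Write one sorted run to a temp file and return its path.
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        for row in sorted_chunk:
            f.write(row + "\n")
    return path

def external_sort(rows, max_in_memory):
    """Sort an iterable of strings while holding at most max_in_memory
    rows in RAM: spill sorted runs to disk, then k-way merge them.
    Sketches what a DBMS does when a sort exceeds its memory budget."""
    runs, chunk = [], []
    for row in rows:
        chunk.append(row)
        if len(chunk) >= max_in_memory:
            runs.append(_spill(sorted(chunk)))
            chunk = []
    if chunk:
        runs.append(_spill(sorted(chunk)))
    # heapq.merge streams from the run files, so memory stays bounded.
    files = [open(path) for path in runs]
    try:
        for line in heapq.merge(*files):
            yield line.rstrip("\n")
    finally:
        for f in files:
            f.close()
        for path in runs:
            os.unlink(path)
```

The disk-space implication is the part relevant to your question: the
temp files together hold about one full copy of the data being sorted,
which is why the sort can succeed even when the result exceeds RAM.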
The reason I am so curious is simply that this is running on a
mission-critical machine, and I need to know what resources
(particularly disk space) will be consumed.
Thanks for your help,
Rupert
CoL <col(at)mportal(dot)hu> wrote in message news:<b4h7kn$22ml$1(at)news(dot)hub(dot)org>...
> In first view, how about using offset and limit?
>
> select ... order by field offset 0 limit 10
> cursor fetch ... if(data < 10*1024)
> select ... order by field offset 10 limit 10
> cursor fetch ... if(data < 10*1024)
> select ... order by field offset 20 limit 10
> ....
> C.
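A database-free sketch of the paging loop CoL outlines above, in case
it helps make the cost model concrete (the function names are mine,
and `sorted()` stands in for the server's ORDER BY). The caveat with
this pattern is that each OFFSET n query still has to produce and
discard the first n sorted rows, so total work grows as you page
deeper:

```python
def fetch_page(sorted_rows, offset, limit):
    # Simulates: SELECT ... ORDER BY field OFFSET offset LIMIT limit.
    # A real server must still generate (and throw away) the first
    # `offset` rows of the sorted result on every call.
    return sorted_rows[offset:offset + limit]

def paged_scan(rows, page_size):
    """Yield all rows in sorted order by repeatedly fetching pages,
    mirroring the offset/limit loop from the quoted suggestion."""
    data = sorted(rows)  # stands in for ORDER BY on the server
    offset = 0
    while True:
        page = fetch_page(data, offset, page_size)
        if not page:
            break  # an empty page means we are past the last row
        yield from page
        offset += page_size
```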