From: | "Dave Dutcher" <dave(at)tridecap(dot)com> |
---|---|
To: | "'soni de'" <soni(dot)de(at)gmail(dot)com> |
Cc: | <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Takes too long to fetch the data from database |
Date: | 2006-04-21 13:26:33 |
Message-ID: | 007801c66547$35e26270$8300a8c0@tridecap.com |
Lists: pgsql-performance
I've never used a cursor in Postgres, but I don't think it will help you
a lot. In theory cursors make it easier to do paging, but your main
problem is that getting the first page is slow. A cursor isn't going to
be any faster at getting the first page than OFFSET/LIMIT is.
Did you try Bruno's suggestion of:
SELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT 50;
You should run an EXPLAIN ANALYZE on that query to see whether it is using an
index scan. Also, what version of Postgres are you using? You can run
SELECT version(); to check.
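For example, something like this would show whether the planner can use an index (a rough sketch only; the index names are made up here, and you should first check whether a suitable index already exists):

EXPLAIN ANALYZE SELECT * FROM wan ORDER BY stime DESC OFFSET 0 LIMIT 50;

-- If the plan shows a sequential scan plus a sort, an index on stime
-- should let a backward index scan stop after the first 50 rows:
CREATE INDEX wan_stime_idx ON wan (stime);

-- Since you also filter by kname, a composite index can serve that query too:
CREATE INDEX wan_kname_stime_idx ON wan (kname, stime);
EXPLAIN ANALYZE SELECT * FROM wan WHERE kname = 'pluto' ORDER BY stime DESC LIMIT 50;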
-----Original Message-----
From: pgsql-performance-owner(at)postgresql(dot)org
[mailto:pgsql-performance-owner(at)postgresql(dot)org] On Behalf Of soni de
Sent: Thursday, April 20, 2006 11:42 PM
To: Merlin Moncure
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: [PERFORM] Takes too long to fetch the data from database
I don't want to query exactly 81900 rows into a set. I just want to fetch
50 or 100 rows at a time, in decreasing order of stime (i.e. 50 or 100
rows starting from the last and going backwards).
Fetching sequentially is also a problem: fetching all the records
(select * from wan where kname='pluto' order by stime) takes more than
4~5 minutes. I tried it on the same table, which has more than
326054 records.
On 4/20/06, Merlin Moncure <mmoncure(at)gmail(dot)com> wrote:
> SELECT * FROM wan ORDER BY stime LIMIT 50 OFFSET 81900;
you need to try and solve the problem without using 'offset'. you could
do:
BEGIN;
DECLARE crs cursor FOR SELECT * FROM wan ORDER BY stime;
FETCH ABSOLUTE 81900 in crs;
FETCH 49 in crs;
CLOSE crs;
COMMIT;
this may be a bit faster but will not solve the fundamental problem.
the more interesting question is why you want to query exactly 81900
rows into a set. This type of thinking will always get you into
trouble; absolute positioning will not really work in a true sql
sense. if you are browsing a table sequentially, there are much
better methods.
merlin
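One of those better methods (not spelled out above; this is only a sketch of the usual "keyset" approach, reusing the table and column names from the thread) is to remember the stime of the last row you displayed and seek past it, instead of counting rows:

-- first page: the newest 50 rows for that kname
SELECT * FROM wan WHERE kname = 'pluto' ORDER BY stime DESC LIMIT 50;

-- next page: pass in the stime of the last row shown
-- (the timestamp below is only a placeholder)
SELECT * FROM wan
WHERE kname = 'pluto' AND stime < '2006-04-20 10:00:00'
ORDER BY stime DESC
LIMIT 50;

With an index on (kname, stime) each page is a short index scan, so a deep page costs about the same as the first one. If stime values can repeat, you need an extra tie-breaker column in the ORDER BY and the WHERE clause, which is glossed over here.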