Re: large query by offset and limit

From: Steve Atkins <steve(at)blighty(dot)com>
To: PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: large query by offset and limit
Date: 2008-05-03 18:17:00
Message-ID: AD64242C-DE11-4793-A470-F9CAE5DA45D6@blighty.com
Lists: pgsql-general


On May 2, 2008, at 2:01 PM, finecur wrote:

> Hi, I am running a database behind a webserver and there is a table
> which is huge. I need to pull data from this table and send it to the
> user through http. If I use
>
> select * from huge_table where userid = 100
>
> It will return millions of records, which exhausts my server's memory.
> So I do this:
>
> select * from huge_table where userid = 100 limit 1000 offset 0
> and then send the results to user, then
>
> select * from huge_table where userid = 100 limit 1000 offset 1000
> and then send the results to user, then
>
> select * from huge_table where userid = 100 limit 1000 offset 2000
> and then send the results to user,
>
> Continue this until there are no records available.
>
> It runs great but it is kind of slow. I think it is because even
> though I need only 1000 records, the query searches the whole table
> every time.

Not quite - if you do a "limit 1000 offset 5000" it'll stop after
retrieving the first 6000 rows from the table. A bigger problem with
doing it this way is that the results aren't particularly well defined
unless there's an order by clause in the query.
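
For instance, sorting on a unique column (assuming here that huge_table
has a primary key column named id, which isn't shown in your message)
makes each page deterministic:

   select * from huge_table where userid = 100
       order by id limit 1000 offset 1000;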

>
>
> Is there a better way to do this?

You want a cursor. See http://www.postgresql.org/docs/8.3/static/sql-declare.html
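
A minimal sketch of the cursor approach, assuming a cursor name of
huge_cur and a page size of 1000 (a non-holdable cursor has to live
inside a transaction):

   begin;
   declare huge_cur cursor for
       select * from huge_table where userid = 100;
   fetch 1000 from huge_cur;  -- repeat until no rows are returned
   close huge_cur;
   commit;

Each fetch hands back the next batch of rows without rescanning the
rows already delivered.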

Cheers,
Steve
