From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: jason(at)mbi(dot)ucla(dot)edu
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: large table problem
Date: 2007-04-20 21:52:03
Message-ID: 22832.1177105923@sss.pgh.pa.us
Lists: pgsql-general
"Jason Nerothin" <jasonnerothin(at)gmail(dot)com> writes:
> Attempt number 2, now underway, is to pass
> LIMIT and OFFSET values to the query which Postgres handles quite
> effectively as long as the OFFSET value is less than the total number of
> rows in the table. When the value is greater than <num_rows>, the query
> hangs for minutes.
I don't actually believe the above; using successively larger offsets
should get slower and slower in a smooth manner, because the only thing
OFFSET does is throw away scanned rows just before they would have been
returned to the client. I think you've confused yourself somehow.
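[Editor's note: Tom's point, that OFFSET merely discards rows after scanning them, means paging through an entire table this way does quadratic total work. A toy simulation (plain Java, no database; the "table" is just an array, and all names here are illustrative) makes the cost visible:]

```java
import java.util.ArrayList;
import java.util.List;

public class OffsetCost {
    // Simulate "SELECT ... LIMIT limit OFFSET offset": the server still
    // scans offset + limit rows and returns only the last `limit` of them.
    static List<Integer> page(int[] table, int limit, int offset, long[] scanned) {
        List<Integer> out = new ArrayList<>();
        for (int i = 0; i < table.length && i < offset + limit; i++) {
            scanned[0]++;                // every row up to offset+limit is visited
            if (i >= offset) out.add(table[i]);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] table = new int[10_000];
        for (int i = 0; i < table.length; i++) table[i] = i;

        long[] scanned = new long[1];
        int pageSize = 100;
        for (int off = 0; off < table.length; off += pageSize)
            page(table, pageSize, off, scanned);

        // 100 pages of 100 rows: rows scanned = 100 + 200 + ... + 10000
        // = 505,000, though only 10,000 rows were ever returned.
        System.out.println(scanned[0]);  // 505000
    }
}
```

Each successive page is a little slower than the last, so the slowdown is smooth rather than a sudden "hang" at some threshold, which is exactly what Tom describes above.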
> the documentation suggests that cursor behavior is a little buggy for the
> current postgres driver.

How old a driver are you using? Because a cursor is definitely what you
want to use for retrieving millions of rows.
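[Editor's note: a minimal sketch of cursor-based fetching with the PostgreSQL JDBC driver, using modern try-with-resources syntax. The connection URL, credentials, and table name are placeholders. The driver only switches from buffering the whole result set to a server-side cursor when autocommit is off and a fetch size is set:]

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CursorFetch {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials for illustration.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {
            // Both conditions are needed for the driver to use a cursor:
            conn.setAutoCommit(false);   // cursors live only inside a transaction
            try (Statement stmt = conn.createStatement()) {
                stmt.setFetchSize(1000); // rows pulled per round trip to the server
                try (ResultSet rs = stmt.executeQuery("SELECT * FROM big_table")) {
                    while (rs.next()) {
                        // process one row at a time; client memory stays bounded
                    }
                }
            }
            conn.commit();
        }
    }
}
```

With this pattern the millions of rows stream through a fixed-size buffer instead of being materialized on the client, so there is no need for LIMIT/OFFSET paging at all.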
It strikes me that pgsql-jdbc might be a more suitable group of people
to ask about this than the -general list ...
			regards, tom lane