From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Denis Perchine <dyp(at)perchine(dot)com>
Cc: Fred_Zellinger(at)seagate(dot)com, pgsql-general(at)hub(dot)org
Subject: Re: Large Tables(>1 Gb)
Date: 2000-06-30 15:32:16
Message-ID: 19026.962379136@sss.pgh.pa.us
Lists: pgsql-general
Denis Perchine <dyp(at)perchine(dot)com> writes:
> 2. Use limit & offset capability of postgres.
> select * from big_table limit 1000 offset 0;
> select * from big_table limit 1000 offset 1000;
This is a risky way to do it --- the Postgres optimizer considers
limit/offset when choosing a plan, and is quite capable of choosing
different plans that yield different tuple orderings depending on the
size of the offset+limit. For a plain SELECT as above you'd probably
be safe enough, but in more complex cases such as having potentially-
indexable WHERE clauses you'll very likely get bitten, unless you have
an ORDER BY clause to guarantee a unique tuple ordering.
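
[To illustrate the point above, here is a hypothetical sketch; the table and its unique id column are assumed, not taken from the thread. Ordering by a unique key pins down the tuple ordering, so consecutive OFFSET windows partition the same row sequence no matter which plan the optimizer picks:]

    SELECT * FROM big_table ORDER BY id LIMIT 1000 OFFSET 0;
    SELECT * FROM big_table ORDER BY id LIMIT 1000 OFFSET 1000;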
Another advantage of FETCH is that you get a consistent result set
even if other backends are modifying the table, since it all happens
within one transaction.
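
[A hypothetical sketch of the cursor approach being referred to; the cursor name is made up. Because every FETCH runs inside the same transaction, all batches see one consistent snapshot of the table:]

    BEGIN;
    DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
    FETCH 1000 FROM big_cur;   -- first batch
    FETCH 1000 FROM big_cur;   -- next batch, same snapshot
    CLOSE big_cur;
    COMMIT;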
regards, tom lane