From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
Cc: Chris <chris(at)bitmead(dot)com>, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Solution for LIMIT cost estimation
Date: 2000-02-14 19:27:49
Message-ID: 22628.950556469@sss.pgh.pa.us
Lists: pgsql-hackers
Philip Warner <pjw(at)rhyme(dot)com(dot)au> writes:
> Just my 0.02c, but I don't like putting limits on an optimizer.
That's my feeling too. I'm leaning towards letting the optimizer do the
best it can with the given query (which means using OFFSET+LIMIT as the
estimated number of tuples to be fetched), and documenting the potential
gotcha as best we can. Something like:
CAUTION: if you repeat a query several times with different OFFSET or
LIMIT values to fetch different portions of the whole result, you will
find that you get inconsistent results unless you specify an ORDER BY
condition that is strong enough to ensure that all selected tuples must
appear in a unique order. Without ORDER BY, the system is free to
return the tuples in any order it finds convenient --- and it may well
make different implementation choices leading to different orderings
depending on the OFFSET and LIMIT values. In general, you should be
very wary of using OFFSET or LIMIT with an unordered or partially
ordered query; you will get a difficult-to-predict, implementation-
dependent subset of the selected tuples.
Is that clear enough? Can anyone improve on the wording?
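For concreteness, the kind of query sequence the caution is aimed at
might look like this (table and column names are invented purely for
illustration):

    -- Without ORDER BY, each execution is free to pick a different plan,
    -- so these two "pages" may overlap or skip rows between them:
    SELECT * FROM widgets LIMIT 20 OFFSET 0;
    SELECT * FROM widgets LIMIT 20 OFFSET 20;

    -- An ORDER BY strong enough to force a unique ordering (here, a
    -- primary key column) makes the pages fit together consistently:
    SELECT * FROM widgets ORDER BY widget_id LIMIT 20 OFFSET 0;
    SELECT * FROM widgets ORDER BY widget_id LIMIT 20 OFFSET 20;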
regards, tom lane