From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
Cc: "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Incorrect behavior with CE and ORDER BY
Date: 2006-10-25 03:34:03
Message-ID: 26540.1161747243@sss.pgh.pa.us
Lists: pgsql-hackers
Alvaro Herrera <alvherre(at)commandprompt(dot)com> writes:
> Is this possible? It would be very fast.
It's possible but not exactly simple. As an example, your proposed
plan:
> Limit (50)
>   Sort (key: pse_lastlogin)
>     Result
>       Append
>         Limit (50)
>           SeqScan tbl_profile_search
>         Limit (50)
>           IndexScan tbl_profile_search_interest_1
>         Limit (50)
>           IndexScan on the index mentioned above
is wrong because there's no guarantee that the first 50 elements of a
seqscan will be anything special. You could imagine dealing with that
by sorting the seqscan results and limiting to 50, or by not
sorting/limiting that data at all but letting the upper sort see all the
seqscan entries. Offhand I think either of those could win depending on
how many elements the seqscan will yield. Also, it might be interesting
to consider inventing a "merge" plan node type that takes N
already-sorted inputs and produces a sorted output stream. Then we'd
need to trade off this approach versus doing the top-level sort, which
could cope with some of its inputs not being pre-sorted.
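For concreteness, here is a minimal sketch of the core of such a merge node: keep a binary min-heap ordered by the head value of each still-active sorted input, pop the smallest head, advance that input, and re-sift. This is plain illustrative C, not executor code; the array-backed Stream type and every name in it are invented for the example.

/*
 * Illustrative N-way merge of pre-sorted inputs via a binary heap.
 * Stream is a stand-in for a sorted input (e.g. one Append child);
 * merge_next() emits the globally next value in sorted order.
 */
#include <stdio.h>

typedef struct
{
    const int  *vals;           /* sorted input values */
    int         len;            /* number of values */
    int         pos;            /* next value to emit */
} Stream;

typedef struct
{
    Stream    **heap;           /* min-heap ordered by current head value */
    int         nstreams;       /* active streams remaining */
} MergeState;

static int
head(const Stream *s)
{
    return s->vals[s->pos];
}

static void
sift_down(MergeState *m, int i)
{
    for (;;)
    {
        int     smallest = i,
                l = 2 * i + 1,
                r = 2 * i + 2;

        if (l < m->nstreams && head(m->heap[l]) < head(m->heap[smallest]))
            smallest = l;
        if (r < m->nstreams && head(m->heap[r]) < head(m->heap[smallest]))
            smallest = r;
        if (smallest == i)
            return;
        Stream *tmp = m->heap[i];
        m->heap[i] = m->heap[smallest];
        m->heap[smallest] = tmp;
        i = smallest;
    }
}

/* Emit the next value in sorted order; return 0 when all inputs are done. */
static int
merge_next(MergeState *m, int *out)
{
    Stream     *top;

    if (m->nstreams == 0)
        return 0;
    top = m->heap[0];
    *out = head(top);
    if (++top->pos >= top->len)
        m->heap[0] = m->heap[--m->nstreams];    /* stream exhausted */
    sift_down(m, 0);
    return 1;
}

int
main(void)
{
    int     a[] = {1, 4, 9}, b[] = {2, 3, 10}, c[] = {0, 5};
    Stream  sa = {a, 3, 0}, sb = {b, 3, 0}, sc = {c, 2, 0};
    Stream *streams[] = {&sa, &sb, &sc};
    MergeState m = {streams, 3};
    int     v;

    /* heapify from the last internal node down */
    for (int i = m.nstreams / 2 - 1; i >= 0; i--)
        sift_down(&m, i);

    while (merge_next(&m, &v))
        printf("%d ", v);       /* prints 0 1 2 3 4 5 9 10 */
    putchar('\n');
    return 0;
}

Each output row costs O(log N) comparisons for N inputs, and the node can stop after emitting LIMIT rows; that incremental, early-stopping behavior is what would have to be traded off against a full top-level sort.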
This seems to have some aspects in common with the recent discussion
about how to optimize min/max aggregates across an appendrel set.
regards, tom lane