| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Josh Berkus <josh(at)agliodbs(dot)com> |
| Cc: | postgres performance list <pgsql-performance(at)postgresql(dot)org> |
| Subject: | Re: Shortcutting too-large offsets? |
| Date: | 2011-09-30 14:36:50 |
| Message-ID: | 6843.1317393410@sss.pgh.pa.us |
| Lists: | pgsql-performance |
Josh Berkus <josh(at)agliodbs(dot)com> writes:
> Here's a case which it seems like we ought to be able to optimize for:
> [ offset skips all the output of a sort node ]
> Is there some non-obvious reason which would make this kind of
> optimization difficult? Doesn't the executor know at that point how
> many rows it has?
In principle, yeah, we could make it do that, but it seems like a likely
source of maintenance headaches. This example is not exactly compelling
enough to make me want to do it. Large OFFSETs are always going to be
problematic from a performance standpoint, and the fact that we could
short-circuit this one corner case isn't really going to make them much
more usable.
regards, tom lane
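The behavior being discussed can be sketched outside of PostgreSQL. In this illustrative Python model (the node names are hypothetical, not PostgreSQL internals), a Limit/Offset node pulls rows one at a time from its child and discards the first `offset` of them, so an OFFSET larger than the result set still forces the full sort and a full pass over the output:

```python
# Minimal sketch, not PostgreSQL internals: a pull-based executor where
# a Limit node discards the first `offset` rows from its child.

def sort_node(rows):
    # A Sort node must materialize and sort its entire input
    # before it can emit even the first row.
    yield from sorted(rows)

def limit_node(child, offset, limit):
    emitted = 0
    for i, row in enumerate(child):
        if i < offset:
            continue  # skipped rows are still fetched from the child, then thrown away
        if limit is not None and emitted >= limit:
            break
        emitted += 1
        yield row

rows = [3, 1, 2]
# OFFSET 100 exceeds the row count: the sort still runs to completion
# and every row is pulled and discarded before an empty result comes back.
print(list(limit_node(sort_node(rows), offset=100, limit=10)))  # → []
```

The short-circuit proposed in the thread would let the Limit node return early once it knows the child's total row count is below the offset; the counterpoint above is that this only saves work in a corner case, since a large-but-not-too-large OFFSET pays the same per-row cost either way.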