From: Greg Stark <stark(at)mit(dot)edu>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Peter Geoghegan <peter(at)2ndquadrant(dot)com>, PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Should we have an optional limit on the recursion depth of recursive CTEs?
Date: 2011-08-15 21:49:47
Message-ID: CAM-w4HPAf7Tfj9Y+TmnG=UaFFhL0Y=y1VEmDH08W4CeaiATHhA@mail.gmail.com
Lists: pgsql-hackers
On Mon, Aug 15, 2011 at 9:31 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> ... and that would be a seriously bad API. There are not SUSET
> restrictions on other resources such as work_mem. Why do we need
> one for this?
I think a better analogy would be imposing a maximum number of rows a
query can output. That might be a sane thing to have for some
circumstances but it's not useful in general.
Consider for instance my favourite recursive query application,
displaying the lock dependency graph for pg_locks. What arbitrary
maximum number of locks would you like to impose at which the query
should error out?
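For concreteness, here is a minimal sketch of the kind of query I mean. The join condition is deliberately simplified to a couple of lock-identity columns; a real version has to compare all of them, and needs a cycle guard since a deadlock makes the graph cyclic:

    WITH RECURSIVE edges AS (
        -- one edge per contested lock: waiting pid -> holding pid
        SELECT w.pid AS waiter, h.pid AS holder
        FROM pg_locks w
        JOIN pg_locks h
          ON h.granted AND NOT w.granted
         AND h.pid <> w.pid
         AND h.locktype = w.locktype
         AND h.relation IS NOT DISTINCT FROM w.relation
         AND h.transactionid IS NOT DISTINCT FROM w.transactionid
    ),
    chains AS (
        SELECT waiter, holder, 1 AS depth
        FROM edges
        UNION ALL
        -- whoever my holder is waiting on, I'm transitively waiting on too
        SELECT c.waiter, e.holder, c.depth + 1
        FROM chains c
        JOIN edges e ON e.waiter = c.holder
    )
    SELECT * FROM chains;

The depth of those chains is bounded only by how many backends happen to be piled up behind one another, which is exactly why any fixed recursion limit would be arbitrary.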
There is a situation, though, that I think is motivating this, where it
would be nice to detect a problem: when the query is such that it
*can't* produce a record because there's an infinite loop before the
first record. Ideally you want some way to detect that you've recursed
and haven't changed anything that could lead to a change in the
recursion condition. But that seems like a pretty hard thing to detect,
probably impossible.
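The trivial case looks something like this: the recursive term re-emits the same row forever, so the working table never empties, and the aggregate in the outer query means no record can come out until the recursion finishes, which it never does:

    WITH RECURSIVE t AS (
        SELECT 1 AS n
        UNION ALL
        SELECT n FROM t    -- nothing changes, so this iterates forever
    )
    SELECT max(n) FROM t;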
--
greg