From: | "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com> |
---|---|
To: | Niklas Hambüchen <mail(at)nh2(dot)me> |
Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-bugs(at)lists(dot)postgresql(dot)org, ruben(at)benaco(dot)com, Niklas Hambüchen <niklas(at)benaco(dot)com> |
Subject: | Re: "memory exhausted" in query parser/simplifier for many nested parentheses |
Date: | 2024-12-13 17:43:03 |
Message-ID: | CAKFQuwa2c5Vjbbo4JoiJWdnqGd277yEfhVX-KJ7UYN+dPU8T7w@mail.gmail.com |
Lists: pgsql-bugs
On Fri, Dec 13, 2024 at 6:53 AM Niklas Hambüchen <mail(at)nh2(dot)me> wrote:
> If I build some workaround today, e.g. splitting the query into multiple
> ones of max length N, how do I know it will still work in the future, e.g.
> if Postgres changes the Bison version or switches to a different parser?
>
If you work around it by doing "create temp table" and then "copy as many rows
into it as you'd like", the limit here effectively disappears.
You also gain the added benefit of less exposure to SQL injection, since this
code path doesn't require placing many potentially user-supplied string
literals into a query body.
In some ways this is a design choice that encourages the user to write a
better query form, one that the system has been optimized around.
I don't disagree with the premise that such hard-coded limits are
undesirable, but they also aren't always worth getting rid of, especially when
they are inherited from an upstream dependency.
David J.