From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: increasing collapse_limits?
Date: 2011-05-01 20:17:06
Message-ID: 4F274948-F668-4534-B40C-F3FC5DAAA782@gmail.com
Lists: pgsql-hackers
On Apr 30, 2011, at 10:21 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> It also occurs to me to wonder if we could adjust the limit on-the-fly
> based on noticing whether or not the query is prone to worst-case
> behavior, ie how dense is the join connection graph.
I've had this thought - or a similar one - before as well. I am not sure how to make it work mechanically, but it would be tremendous if we could. For most people, my previous naive suggestion (remove the limit entirely) would actually work fine, BUT if you hit the problem cases then even a small increase is too much. So I don't really think increasing the limit will eliminate the need for manual fiddling - what we really need is a more accurate measure of complexity than "number of tables".
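To make the graph-density idea concrete, here is a rough sketch - not PostgreSQL internals, just an illustration; the function name and the edge-list representation are invented for this example. It measures how many of the possible join edges between relations actually appear, which is the kind of signal an on-the-fly limit adjustment could key off of:

```python
# Hypothetical sketch of the "join connection graph density" idea.
# Relations are numbered 0..n-1; join_clauses is a list of (a, b) pairs,
# one per relation pair connected by a join clause. None of these names
# correspond to actual PostgreSQL planner structures.

def join_graph_density(num_relations, join_clauses):
    """Fraction of possible relation pairs that are directly joined."""
    if num_relations < 2:
        return 0.0
    max_edges = num_relations * (num_relations - 1) / 2
    # Deduplicate edges regardless of direction.
    edges = {frozenset(pair) for pair in join_clauses}
    return len(edges) / max_edges

# A star join (every relation joined only to a central fact table)
# is sparse, so a planner could afford a higher collapse limit:
star = [(0, i) for i in range(1, 8)]
print(join_graph_density(8, star))  # 7 edges of 28 possible -> 0.25
```

A densely connected clique of the same size would score 1.0, flagging exactly the worst-case queries where even a small increase in the limit blows up planning time.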
...Robert