From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(dot)riggs(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, Jakub Wartak <jakub(dot)wartak(at)enterprisedb(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Damage control for planner's get_actual_variable_endpoint() runaway
Date: 2022-11-21 15:32:28
Message-ID: 3122555.1669044748@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> Is there any reason to tie this into page costs? I'd be more inclined
> to just make it a hard limit on the number of pages. I think that
> would be more predictable and less prone to surprising (bad) behavior.

Agreed, a simple limit of N pages fetched seems appropriate.

> And to be honest I would be inclined to make it quite a small number.
> Perhaps 5 or 10. Is there a good argument for going any higher?

Sure: people are not complaining until it gets into the thousands.
And you have to remember that the entire mechanism exists only
because of user complaints about inaccurate estimates. We shouldn't
be too eager to resurrect that problem.

I'd be happy with a limit of 100 pages.

			regards, tom lane