From: Eric Schwarzenbach <Eric(dot)J(dot)Schwarzenbach(dot)C88(at)alumni(dot)upenn(dot)edu>
To: pgsql-general(at)postgresql(dot)org
Subject: [Fwd: Re: GEQO randomness?]
Date: 2008-10-31 22:57:25
Message-ID: 490B8D55.7030409@alumni.upenn.edu
Lists: pgsql-general
My problem with GEQO using a random number generator is that
non-deterministic behavior is really hard to debug, and problems can go
undiagnosed for ages. Frankly, I would rather something fail all the
time than work most of the time and fail just now and then. Never
getting a good plan for a query would be an improvement, because I would
immediately be aware there's a problem and be forced to do something
about it, as opposed to perhaps never realizing there is going to
*sometimes* be a problem.
Suppose a complex query, like mine, had an even more rarely occurring
bad-plan result, where, as with mine, the query would now and then
simply go out to lunch for all intents and purposes and bog down the
server for the next 30 minutes. But suppose that result was rarer than
in my case, and the developer never saw it and blithely sent it out
into production. Every now and then the system would start performing
horribly and no one would know why. The developers might hear of it and
bring in the debugger, and perhaps simply never duplicate it because
it's so erratic. In fact, I'd be willing to bet there are any number of
production applications out in the wild using PostgreSQL with that very
problem, where the problem is just never traced back to PostgreSQL.
I'm sorry if I sound strident, but I feel strongly that non-determinacy
in a system is a Bad Thing, and wish to convey why. I understand from
the documentation that the PostgreSQL team is aware the algorithm is not
ideal, and I appreciate the non-triviality of replacing it. I do
appreciate your responses and your suggestions.
For my own case, I'll certainly be doing one or more of the alternatives
you mentioned (#1 for the short term, at least), and I had #3 in mind
even before I ran into this problem (the only question is when I will
have time to do it).
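
For reference, a minimal sketch of what #1 looks like at the session
level (geqo and geqo_threshold are the actual GUC names; the threshold
value below is illustrative only, and just needs to exceed the number
of FROM items in the problem query):

    -- Disable the genetic optimizer entirely for this session...
    SET geqo = off;

    -- ...or keep it, but raise the threshold so this query is planned
    -- exhaustively (the default threshold is 12; 14 is an example value).
    SET geqo_threshold = 14;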
Thanks again,
Eric
Tom Lane wrote:
> Eric Schwarzenbach <subscriber(at)blackbrook(dot)org> writes:
>
>> Now ordinarily I would interpret this use of the word "random" loosely, to
>> mean "arbitrarily" or "using some non-meaningful selection criteria". But
>> given what I am seeing, this leads me to consider that "random" is meant
>> literally, and that it actually uses a random number generator to choose paths. Can
>> someone confirm that this really is the case?
>>
>
> What it's doing is searching a subset of the space of all possible join
> orders. It still picks the best (according to cost estimate) plan
> within that subset, but if you're unlucky there may be no very good plan
> in that subset. And yes, there is a random number generator in there.
>
>
>> If so, is this really a good idea?
>>
>
> The alternatives are not very appealing either ...
>
>
>> I would think it would be much more sensible to have it
>> operate deterministically (such as with some predetermined random
>> sequence of numbers used repeatedly).
>>
>
> ... in particular, that one's hardly a panacea. For one thing, a
> not-unlikely outcome would be that you *never* get a good plan and thus
> don't even get a hint that you might be missing something. For another,
> the data values used in the query and the current ANALYZE statistics
> also affect the search, which means that in the real world where those
> things change, you'd still be exposed to getting the occasional
> unexpectedly bad plan.
>
> There are a number of alternatives you can consider though:
>
> 1. Disable geqo or bump up the threshold enough that it's not used for
> your query. Whether this is a feasible answer is impossible to say with
> the limited detail you've provided. (Remember that potentially
> exponential search time.)
>
> 2. Increase geqo_effort to make the randomized search run a bit longer
> and examine more plans. This just decreases the probability of losing,
> but maybe it will do so enough that you won't care anymore.
>
> 3. Figure out what's a good join order, rewrite your query to explicitly
> join in that order, and *decrease* join_collapse_limit to force the
> planner to follow that order instead of searching. Permanent solution
> but the initial development effort is high, especially if you have a lot
> of different queries that need this treatment.
>
> 4. Write a better randomized-search algorithm and submit a patch ;-)
> We have good reason to think that the GEQO code is not a really
> intelligent approach to doing randomized plan searching --- it's based
> on an algorithm designed to solve traveling-salesman problems, which is
> not such a good match to join-order problems --- but no one's yet gotten
> motivated to replace it.
>
> regards, tom lane
>
>
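
For the archives, a minimal sketch of what #2 and #3 above look like in
practice. The GUC names are real PostgreSQL settings; the tables in the
#3 example are hypothetical and only illustrate the shape of an
explicitly ordered join:

    -- #2: run the randomized search at maximum effort
    -- (geqo_effort ranges from 1 to 10; the default is 5).
    SET geqo_effort = 10;

    -- #3: with join_collapse_limit = 1, the planner follows the textual
    -- order of explicit JOIN clauses instead of searching join orders.
    -- Tables a, b, and c are placeholders.
    SET join_collapse_limit = 1;
    SELECT a.id
    FROM a
    JOIN b ON b.a_id = a.id
    JOIN c ON c.b_id = b.id;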