From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andrei Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru>, Alexander Korotkov <aekorotkov(at)gmail(dot)com>, Alena Rybakina <a(dot)rybakina(at)postgrespro(dot)ru>, pgsql-hackers(at)postgresql(dot)org, "Finnerty, Jim" <jfinnert(at)amazon(dot)com>, Marcos Pegoraro <marcos(at)f10(dot)com(dot)br>, teodor(at)sigaev(dot)ru, Ranier Vilela <ranier(dot)vf(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, Peter Eisentraut <peter(at)eisentraut(dot)org>
Subject: Re: POC, WIP: OR-clause support for indexes
Date: 2023-11-28 01:07:46
Message-ID: CAH2-Wzm2=nf_JhiM3A2yetxRs8Nd2NuN3JqH=fm_YWYd1oYoPg@mail.gmail.com
Lists: pgsql-hackers
On Mon, Nov 27, 2023 at 4:07 PM Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> > I am sure that there is a great deal of truth to this. The general
> > conclusion about parse analysis being the wrong place for this seems
> > very hard to argue with. But I'm much less sure that there needs to be
> > a conventional cost model.
>
> I'm not sure about that part, either. The big reason we shouldn't do
> this in parse analysis is that parse analysis is supposed to produce
> an internal representation which is basically just a direct
> translation of what the user entered. It should be possible to
> deparse that representation into more or less what the user entered,
> without significant transformations. References to objects like tables
> and operators do get resolved to OIDs at this stage, so deparsing
> results will vary if objects are renamed or the search_path changes
> and more or less schema-qualification is required or things like that,
> but the output of parse analysis is supposed to preserve the meaning
> of the query as entered by the user.
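To make the deparsing point concrete, here's a sketch (the table and
view names are made up, and the deparsed output shown is only
approximate):

    CREATE TABLE t (a int);
    CREATE VIEW v AS SELECT * FROM t WHERE a != 42;

    -- Parse analysis resolved "!=" to the catalog operator "<>", so
    -- deparsing the stored parse tree gives back "<>", even though
    -- the meaning of the view is unchanged:
    SELECT pg_get_viewdef('v'::regclass, true);
    -- expected output, more or less:
    --   SELECT a FROM t WHERE a <> 42;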
One of the reasons why we shouldn't do this during parse analysis is
that query rewriting might matter. But it doesn't follow, by process
of elimination, that the transformation/normalization process must
fundamentally be the responsibility of the optimizer.
Maybe it should be the responsibility of some other phase of query
processing, invented solely to make life easier for the optimizer, but
not formally part of query planning per se.
> The right place to do
> optimization is in the optimizer.
Then why doesn't the optimizer do query rewriting? Isn't that also a
kind of optimization, at least in part?
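For context, "query rewriting" here means the rewriter stage that
expands views and applies rules before planning ever starts; a sketch,
reusing the hypothetical table and view from the example above:

    -- The rewriter, not the planner, expands the view reference, so
    -- the optimizer only ever sees a query against the base table:
    EXPLAIN (COSTS OFF) SELECT * FROM v WHERE a < 10;
    -- expected plan shape, more or less:
    --   Seq Scan on t
    --     Filter: ((a <> 42) AND (a < 10))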
> > The planner's cost model is supposed to have some basis in physical
> > runtime costs, which is not the case for any of these transformations.
> > Not in any general sense; they're just transformations that enable
> > finding a cheaper way to execute the query. While they have to pay for
> > themselves, in some sense, I think that that's purely a matter of
> > managing the added planner cycles. In principle they shouldn't have
> > any direct impact on the physical costs incurred by physical
> > operators. No?
>
> Right. It's just that, as a practical matter, some of the operators
> deal with one form better than the other. So if we waited until we
> knew which operator we were using to decide on which form to pick,
> that would let us be smart.
ISTM that the real problem is that this is true in the first place. If
the optimizer had only one representation for any two semantically
equivalent spellings of the same qual, then it would always use the
best available representation. That seems even smarter, because that
way the planner can be dumb and still look fairly smart at runtime.
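To illustrate the status quo (again assuming the hypothetical table
from above): the parser already normalizes one pair of equivalent
spellings, but not the pair at issue in this thread:

    -- An IN list is normalized to "= ANY (ARRAY[...])" early on:
    EXPLAIN (COSTS OFF) SELECT * FROM t WHERE a IN (1, 2, 3);
    --   Filter: (a = ANY ('{1,2,3}'::integer[]))

    -- ...but the equivalent OR spelling keeps its own representation:
    EXPLAIN (COSTS OFF) SELECT * FROM t WHERE a = 1 OR a = 2 OR a = 3;
    --   Filter: ((a = 1) OR (a = 2) OR (a = 3))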
I am trying to be pragmatic, too (at least I think so). If having only
one representation turns out to be very hard, then maybe the two
spellings weren't ever really equivalent -- meaning it really is an
optimization problem, and the responsibility of the planner. It seems
like it would
be more useful to spend time on making the world simpler for the
optimizer, rather than spending time on making the optimizer smarter.
Especially if we're talking about teaching the optimizer about what
are actually fairly accidental differences that come from
implementation details.
I understand that it'll never be black and white. There are practical
constraints on how far you go with this. We throw around terms like
"semantically equivalent" as if everybody agreed on precisely what
that means, which isn't really true (users complain when their view
definition has "<>" instead of "!="). Even so, I bet that we could
bring things far closer to this theoretical ideal, to good effect.
> > As I keep pointing out, there is a sound theoretical basis to the idea
> > of normalizing to conjunctive normal form as its own standard step in
> > query processing. To some extent we do this already, but it's all
> > rather ad-hoc. Even if (say) the nbtree preprocessing transformations
> > that I described were something that the planner already knew about
> > directly, they still wouldn't really need to be costed. They're pretty
> > much strictly better at runtime (at most you only have to worry about
> > the fixed cost of determining if they apply at all).
>
> It's just a matter of figuring out where we can put the logic and have
> the result make sense. We'd like to put it someplace where it's not
> too expensive and gets the right answer.
Agreed.
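For reference, the conversion to conjunctive normal form discussed
above just distributes OR over AND; a minimal sketch, with a made-up
three-column table:

    CREATE TABLE t2 (a int, b int, c int);

    -- These two quals are logically equivalent; the second is the
    -- conjunctive normal form of the first:
    SELECT * FROM t2 WHERE (a = 1 AND b = 2) OR c = 3;
    SELECT * FROM t2 WHERE (a = 1 OR c = 3) AND (b = 2 OR c = 3);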
--
Peter Geoghegan