Re: PostgreSQL seems to create inefficient plans in simple conditional joins

From: Vitalii Tymchyshyn <vit(at)tym(dot)im>
To: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
Cc: Hedayat Vatankhah <hedayat(dot)fwd(at)gmail(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: PostgreSQL seems to create inefficient plans in simple conditional joins
Date: 2016-01-30 18:52:09
Message-ID: CABWW-d0cDZNXUiPgJ0TPgTL5ENQ3km7LCMCcdziwR1pWrMpZBQ@mail.gmail.com
Lists: pgsql-performance

Well, as far as I can see it was just a few phrases, unless I'm missing
something. Maybe it's worth bringing it to -hackers for a wider discussion?

Best regards, Vitalii Tymchyshyn

Sat, 30 Jan 2016 12:31 David Rowley <david(dot)rowley(at)2ndquadrant(dot)com> wrote:

> On 31 January 2016 at 06:14, Vitalii Tymchyshyn <vit(at)tym(dot)im> wrote:
> > It may be more for -hackers, but I often hear "this won't be used
> > because of the planning time increase". As I understand it, we now keep
> > statistics on real query time after a few runs, which are used to decide
> > whether the plan should be switched.
> > Can these statistics be used to apply advanced planning features to
> > relatively long-running queries? E.g. a parameter like
> > sophisticated_planning_l1_threshold=500ms. If a query runs over this
> > threshold, replan it with more sophisticated features, at the cost of a
> > few more milliseconds of planning. Different levels could also be
> > introduced, and setting the threshold to 0 would mean "apply to all
> > queries right away".
> > Another good option is a threshold against cumulative query time. E.g.
> > if there were 10000 runs at 0.5 ms each, it may be beneficial to spend a
> > few extra milliseconds planning to get 0.2 ms per run instead.
>
> I agree with you. I was recently working with long-running queries on
> a large 3TB database. I discovered a new optimisation was possible,
> and wrote a patch to implement it. In testing, the extra work the
> optimiser performed took 7 microseconds, and it saved 6 hours of
> execution time. Now, I've never been much of an investor in my life,
> but a roughly 3-billion-fold return on an investment seems quite
> favourable. Of course, that's quite an extreme case, but it's hard to
> ignore that the benefit is still significant in less extreme cases.
>
> The idea you've mentioned here is very similar to what I brought up at
> the developer meeting a few days ago; see the AOB section in [1].
>
> Unfortunately I didn't really get many of the right people on my
> side with it, and some wanted examples of specific patches, which is
> not at all what I wanted to talk about. I was aiming more for some
> agreement on generic infrastructure to do exactly what you describe.
>
> [1] https://wiki.postgresql.org/wiki/FOSDEM/PGDay_2016_Developer_Meeting
>
>
> --
> David Rowley http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Training & Services
>
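The threshold scheme in the quoted proposal could be sketched roughly as follows. This is an illustrative C fragment only: the struct, function, and parameter names are hypothetical and are not part of PostgreSQL; it assumes, as the proposal does, that real per-query execution times are already being tracked.

```c
#include <stdbool.h>

/* Hypothetical per-query statistics, assumed to be collected by the
 * executor across runs of the same query. */
typedef struct QueryStats
{
    double last_exec_ms;   /* most recent execution time */
    double total_exec_ms;  /* cumulative execution time over all runs */
    long   num_runs;       /* number of executions so far */
} QueryStats;

/*
 * Decide whether to replan with more sophisticated (and more expensive)
 * optimizations.  l1_threshold_ms mirrors the proposed
 * sophisticated_planning_l1_threshold parameter; 0 means "apply to all
 * queries right away".  cumulative_threshold_ms covers the case of a
 * cheap query executed many times.
 */
bool
should_replan_sophisticated(const QueryStats *stats,
                            double l1_threshold_ms,
                            double cumulative_threshold_ms)
{
    if (l1_threshold_ms == 0)
        return true;        /* threshold 0: replan everything */

    if (stats->last_exec_ms > l1_threshold_ms)
        return true;        /* a single run already exceeded the limit */

    if (stats->total_exec_ms > cumulative_threshold_ms)
        return true;        /* cheap per run, but executed very often */

    return false;
}
```

For the cumulative case from the proposal, 10000 runs at 0.5 ms each cost 5 seconds in total, so spending even a few extra milliseconds of planning to reach 0.2 ms per run pays back after a few thousand further executions.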
