Re: detecting poor query plans

From: Greg Stark <gsstark(at)mit(dot)edu>
To: Neil Conway <neilc(at)samurai(dot)com>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: detecting poor query plans
Date: 2003-11-26 20:45:49
Message-ID: 8765h6x3ia.fsf@stark.dyndns.tv
Lists: pgsql-hackers

Neil Conway <neilc(at)samurai(dot)com> writes:

> I was thinking about this, but I couldn't think of how to get it to
> work properly:
>
> (1) The optimizer's cost metric is somewhat bogus to begin with.
> ISTM that translating a cost of X into an expected runtime of
> Y msecs is definitely not trivial to do.

At least for all the possible plans of a given query at a specific point in
time, the intention is that the cost be proportional to the execution time.

> the exact time it takes to produce that result relation depends on a wide
> collection of external factors.

That's a valid point. The ms/cost factor may not be constant over time.
However, I think in the normal case this number will tend towards a fairly
consistent value across queries and over time. It will be influenced somewhat
by things like cache contention with other applications, though.
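
To make that concrete, here's a hypothetical example (table name and all
numbers invented) of deriving a ms/cost ratio from EXPLAIN ANALYZE output:

    EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

    Index Scan using orders_customer_idx on orders
        (cost=0.00..1520.00 rows=500 width=96)
        (actual time=0.040..12.300 rows=480 loops=1)
    Total runtime: 13.10 ms

    -- ms/cost for this plan: 13.10 / 1520.00 = ~0.0086 ms per cost unit

If that ratio stays roughly stable across queries on a given machine, then a
query whose ratio comes out far from the norm would be a candidate for
flagging as a misestimated plan.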

On further thought, the real problem is that these actual runtimes are only
available when running the query under EXPLAIN ANALYZE. As shown recently on
one of the lists, the cost of the repeated gettimeofday() calls the
instrumentation makes can be substantial. It's not really feasible to suggest
running all queries with that profiling enabled.
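
For what it's worth, the overhead is easy to demonstrate from psql (again
only a sketch, with an invented table and timings). EXPLAIN ANALYZE takes a
pair of gettimeofday() readings around every row at every plan node, so:

    \timing on

    SELECT count(*) FROM big_table;
    -- Time: 812 ms               (plain execution)

    EXPLAIN ANALYZE SELECT count(*) FROM big_table;
    -- Total runtime: 1490 ms     (same query, with per-node timing)

The gap between the two is almost entirely timer overhead, and it scales
with the number of rows flowing through the plan.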

--
greg
