Re: detecting poor query plans

From: Tom Lane <tgl@sss.pgh.pa.us>
To: Greg Stark <gsstark@mit.edu>
Cc: Neil Conway <neilc@samurai.com>, pgsql-hackers@postgresql.org
Subject: Re: detecting poor query plans
Date: 2003-11-26 22:14:36
Message-ID: 18130.1069884876@sss.pgh.pa.us
Lists: pgsql-hackers

Greg Stark <gsstark@mit.edu> writes:
> That's a valid point. The ms/cost factor may not be constant over time.
> However I think in the normal case this number will tend towards a fairly
> consistent value across queries and over time. It will be influenced somewhat
> by things like cache contention with other applications though.

I think it would be interesting to collect the numbers over a long
period of time and try to learn something from the averages. The real
hole in Neil's original suggestion was that it assumed that comparisons
based on just a single query would be meaningful enough to pester the
user about.
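
Purely to illustrate, with made-up names (nothing like this exists in
the backend): a per-backend accumulator could keep a running mean and
variance of the observed ms-per-cost-unit ratio, and only complain once
a query lands well outside that baseline.  Welford's online method
keeps the state to three numbers, so there is nothing to store per
query:

    #include <math.h>
    #include <stdbool.h>

    /* Hypothetical accumulator for actual_ms / est_cost ratios. */
    typedef struct RatioStats
    {
        long        n;          /* queries sampled so far */
        double      mean;       /* running mean of the ratio */
        double      m2;         /* running sum of squared deviations */
    } RatioStats;

    static void
    ratio_accum(RatioStats *s, double actual_ms, double est_cost)
    {
        double      ratio,
                    delta;

        if (est_cost <= 0)
            return;             /* ignore degenerate estimates */
        ratio = actual_ms / est_cost;
        delta = ratio - s->mean;
        s->n++;
        s->mean += delta / s->n;
        s->m2 += delta * (ratio - s->mean);
    }

    /* Complain only once a real baseline exists, and only for
     * queries several standard deviations off the mean. */
    static bool
    ratio_is_suspicious(const RatioStats *s,
                        double actual_ms, double est_cost)
    {
        double      stddev;

        if (s->n < 100 || est_cost <= 0)
            return false;
        stddev = sqrt(s->m2 / (s->n - 1));
        return fabs(actual_ms / est_cost - s->mean) > 3.0 * stddev;
    }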

> On further thought the real problem is that these numbers are only available
> when running with "explain" on. As shown recently on one of the lists, the
> cost of the repeated gettimeofday calls can be substantial. It's not really
> feasible to suggest running all queries with that profiling.

Yeah. You could imagine a simplified-stats mode that only collects the
total runtime (two gettimeofday's per query is nothing) and the row
counts (shouldn't be impossibly expensive either, especially if we
merged the needed fields into PlanState instead of requiring a
separately allocated node). Not sure if that's as useful though.
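
As a standalone toy (this PlanState is a one-field stand-in for the
real struct, not a proposed patch), the cheap mode boils down to a bare
per-row increment plus two gettimeofday() calls bracketing the query:

    #include <stdio.h>
    #include <sys/time.h>

    /* Stand-in for the real PlanState; the point is that the counter
     * lives in the node itself, not in a separately palloc'd
     * Instrumentation struct. */
    typedef struct PlanState
    {
        double      ntuples;    /* rows emitted by this node so far */
    } PlanState;

    /* Per-row bookkeeping: one increment, no clock reads. */
    static void
    count_tuple(PlanState *ps)
    {
        ps->ntuples += 1;
    }

    int
    main(void)
    {
        struct timeval start,
                    stop;
        PlanState   seqscan = {0};
        double      elapsed_ms;
        long        i;

        gettimeofday(&start, NULL);  /* one call at query start... */
        for (i = 0; i < 1000000; i++)
            count_tuple(&seqscan);
        gettimeofday(&stop, NULL);   /* ...and one at query end */

        elapsed_ms = (stop.tv_sec - start.tv_sec) * 1000.0 +
            (stop.tv_usec - start.tv_usec) / 1000.0;
        printf("rows=%.0f elapsed=%.3f ms\n",
               seqscan.ntuples, elapsed_ms);
        return 0;
    }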

regards, tom lane
