From: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, Neil Conway <neilc(at)samurai(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: detecting poor query plans
Date: 2003-11-26 23:50:17
Message-ID: Pine.LNX.4.58.0311271044100.2497@linuxworld.com.au
Lists: pgsql-hackers
> > On further thought the real problem is that these numbers are only available
> > when running with "explain" on. As shown recently on one of the lists, the
> > cost of the repeated gettimeofday calls can be substantial. It's not really
> > feasible to suggest running all queries with that profiling.
>
> Yeah. You could imagine a simplified-stats mode that only collects the
> total runtime (two gettimeofday's per query is nothing) and the row
> counts (shouldn't be impossibly expensive either, especially if we
> merged the needed fields into PlanState instead of requiring a
> separately allocated node). Not sure if that's as useful though.
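For what it's worth, a rough sketch of how I picture that -- the field names
below are invented for illustration, not taken from the actual executor
headers:

    /*
     * Hypothetical sketch only: fold lightweight counters straight into
     * PlanState so no separate instrumentation node needs to be palloc'd.
     * stat_ntuples / stat_nloops are made-up names.
     */
    typedef struct PlanState
    {
        NodeTag     type;
        Plan       *plan;           /* associated Plan node */
        EState     *state;          /* executor's per-query state */
        /* ... existing fields ... */

        double      stat_ntuples;   /* rows actually emitted by this node */
        double      stat_nloops;    /* times this node was (re)started */
    } PlanState;

The per-tuple cost would then be a plain increment in each node's return
path rather than a gettimeofday() call.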
How about a PGC_POSTMASTER GUC variable which tells postgres to collect
details on the planner's estimates and how they compare to actual run times?
Optionally, we could also have the executor run some/all of the possible
plans (presumably only useful for SELECTs) and keep details on the
performance of each. At postmaster shutdown (some other time?) a report
could be produced profiling all queries.
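To illustrate, something roughly in the shape of the existing boolean
entries in guc.c -- the name, group and description here are all invented,
purely to show what I mean:

    /* Hypothetical guc.c entry; "profile_planner" is a made-up name. */
    {
        {"profile_planner", PGC_POSTMASTER, STATS_MONITORING,
            gettext_noop("Collects planner estimates and actual run times for a profiling report."),
            NULL
        },
        &profile_planner,
        false, NULL, NULL
    },

The DBA would switch it on in postgresql.conf on the development box, replay
a realistic workload, and read the report when the postmaster shuts down.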
The reason I suggest this is that it would have zero impact on production
databases, but would allow DBAs to profile their databases with real usage
patterns in development environments.
Gavin