From: Greg Stark <gsstark(at)mit(dot)edu>
To: Richard Huxton <dev(at)archonet(dot)com>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, Josh Berkus <josh(at)agliodbs(dot)com>, Tambet Matiisen <t(dot)matiisen(at)aprote(dot)ee>, pgsql-performance(at)postgresql(dot)org
Subject: Re: What about utility to calculate planner cost constants?
Date: 2005-03-22 16:19:40
Message-ID: 87y8cfbqlf.fsf@stark.xeocode.com
Lists: pgsql-performance
Richard Huxton <dev(at)archonet(dot)com> writes:
> You'd only need to log them if they diverged from expected anyway. That should
> result in fairly low activity pretty quickly (or we're wasting our time).
> Should they go to the stats collector rather than logs?
I think you need to log them all. Otherwise, when you go to analyze the numbers
and come up with ideal values, you're going to be basing your optimization on a
skewed subset.
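
For what it's worth, once you have an unskewed log the analysis step is just a
linear regression: each logged plan node gives you the amounts of work the cost
model charges for (sequential pages, random pages, tuples processed, ...) plus
the measured runtime, and the cost constants fall out as the fitted
coefficients. A rough sketch in Python -- the sample layout is invented for
illustration, nothing in the backend produces it today:

import numpy as np

def fit_cost_constants(samples):
    """Least-squares fit of actual_ms ~= work_counts . constants."""
    data = np.asarray(samples, dtype=float)
    counts, actual_ms = data[:, :-1], data[:, -1]
    constants, *_ = np.linalg.lstsq(counts, actual_ms, rcond=None)
    return constants

# Made-up samples: seq_pages, random_pages, cpu_tuples, actual_ms
samples = [
    [1000,   0,  50000,  60.0],
    [   0, 200,   5000,  42.0],
    [ 500, 100,  20000,  52.0],
    [2000,  50, 100000, 125.0],
]
seq, rand, cpu = fit_cost_constants(samples)
# Rescale so seq_page_cost = 1.0, matching the planner's convention.
print(f"random_page_cost ~ {rand/seq:.2f}, cpu_tuple_cost ~ {cpu/seq:.4f}")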
I don't know whether the stats collector or the logs is better suited to this.
> > (Also, currently explain analyze has overhead that makes this impractical.
> > Ideally it could subtract out its overhead so the solutions would be accurate
> > enough to be useful)
>
> Don't we only need the top-level figures though? There's no need to record
> timings for each stage, just work completed.
I guess you only need the top-level values. But you might also want a flag if
the row counts for any node were far off; in that case perhaps you'd want to
discard the data point.
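
The discard rule could be as simple as a ratio test on estimated versus actual
rows; something like this, where the record shape and the 10x cutoff are made
up for the example:

def trustworthy(samples, max_ratio=10.0):
    """Drop samples whose row-count estimate was off by more than
    max_ratio in either direction: if the estimate was that far off,
    the node was costed against the wrong row count and the sample
    would just add noise to the fit."""
    kept = []
    for s in samples:
        est = max(s["est_rows"], 1)
        actual = max(s["actual_rows"], 1)
        if max(est, actual) / min(est, actual) <= max_ratio:
            kept.append(s)
    return kept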
--
greg