From: AgentM <agentm(at)themactionfaction(dot)com>
To: postgres hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Optimizer improvements: to do or not to do?
Date: 2006-09-13 19:29:45
Message-ID: 89D75997-2F80-4B2B-88E7-1C481DB84291@themactionfaction.com
Lists: pgsql-hackers
On Sep 13, 2006, at 14:44, Gregory Stark wrote:

> I think we need a serious statistics jock to pipe up with some standard metrics that do what we need. Otherwise we'll never have a solid footing for the predictions we make and will never know how much we can trust them.
>
> That said, I'm now going to do exactly what I just said we should stop doing and brainstorm about an ad-hoc metric that might help:
>
> I wonder if what we need is something like: sort the sampled values by value and count up the average number of distinct blocks per value. That might let us predict how many pages a fetch of a specific value would retrieve. Or perhaps we need a second histogram where the quantities are of distinct pages rather than total records.
>
> We might also need a separate "average number of n-block spans per value" metric to predict how sequential the I/O will be, in addition to how many pages will be fetched.
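
To make those two metrics concrete, here is a rough sketch, in Python rather than backend C, and purely illustrative. It assumes we already have sampled (value, block number) pairs from the pages ANALYZE reads; nothing here reflects actual PostgreSQL data structures:

# Illustrative sketch only, not backend code. Input: sampled
# (value, block_number) pairs from a table.
from collections import defaultdict

def clustering_metrics(sample):
    blocks_by_value = defaultdict(set)
    for value, block in sample:
        blocks_by_value[value].add(block)

    # Average number of distinct blocks per value: estimates how many
    # pages a fetch of one specific value would have to touch.
    avg_blocks = (sum(len(b) for b in blocks_by_value.values())
                  / len(blocks_by_value))

    # Number of contiguous block spans for one value: each gap in the
    # sorted block list starts a new span, so fewer spans means more
    # sequential I/O.
    def spans(blocks):
        ordered = sorted(blocks)
        return 1 + sum(1 for a, b in zip(ordered, ordered[1:]) if b > a + 1)

    avg_spans = (sum(spans(b) for b in blocks_by_value.values())
                 / len(blocks_by_value))
    return avg_blocks, avg_spans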
Currently, statistics are only collected during an ANALYZE. Why aren't statistics also collected during actual query execution, e.g. during seq scans? One could turn such a beast off in order to get repeatable, deterministic optimizer results.
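
A hand-wavy sketch of what such piggy-backed collection might look like (again Python, not backend C; collect_runtime_stats is an imagined knob for illustration, not an existing GUC):

import random

collect_runtime_stats = True  # imagined switch, e.g. turned off for
                              # repeatable planner behavior

class RuntimeSampler:
    """Cheap reservoir sample maintained as rows stream past."""
    def __init__(self, capacity=1000):
        self.capacity = capacity
        self.seen = 0
        self.sample = []

    def observe(self, row):
        if not collect_runtime_stats:
            return
        self.seen += 1
        if len(self.sample) < self.capacity:
            self.sample.append(row)
        else:
            # Standard reservoir sampling (Algorithm R): each row ends
            # up in the sample with probability capacity/seen.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.sample[j] = row

def seq_scan(table, sampler):
    for row in table:         # the scan the query was doing anyway
        sampler.observe(row)  # stats collection rides along
        yield row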
-M