From: Mark Kirkwood <markir(at)coretech(dot)co(dot)nz>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
Cc: "Jim C(dot) Nasby" <decibel(at)decibel(dot)org>, Ron Mayer <rm_pg(at)cheapcomplexdevices(dot)com>, pgsql(at)mohawksoft(dot)com, Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Query optimizer 8.0.1 (and 8.0)
Date: 2005-02-20 20:30:41
Message-ID: 4218F371.90905@coretech.co.nz
Lists: pgsql-hackers
Bruce Momjian wrote:
> Jim C. Nasby wrote:
>
>>On Mon, Feb 14, 2005 at 09:55:38AM -0800, Ron Mayer wrote:
>>
>>
>>>I still suspect that the correct way to do it would not be
>>>to use the single "correlation", but 2 stats - one for estimating
>>>how sequential/random accesses would be; and one for estimating
>>>the number of pages that would be hit. I think the existing
>>>correlation does well for the first estimate; but for many data
>>>sets, poorly for the second type.
>>
>>
>>Should this be made a TODO? Is there some way we can estimate how much
>>this would help without actually building it?
>
>
> I guess I am confused how we would actually do that or if it is
> possible.
>
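To make Ron's point concrete, here is a tiny illustrative sketch
(Python, purely hypothetical, nothing from the PostgreSQL source) of a
layout where the global correlation is near zero, yet a small range
query touches only a handful of pages:

import random
import statistics

random.seed(1)
ROWS_PER_PAGE = 100

# 100 distinct values, each stored as one contiguous run of 200 rows,
# with the runs shuffled into random order across the heap.
runs = [[v] * 200 for v in range(100)]
random.shuffle(runs)
heap = [v for run in runs for v in run]  # value held at each heap position

positions = list(range(len(heap)))
print("global correlation: %.3f"
      % statistics.correlation(positions, heap))  # close to 0

# Pages a range query on value BETWEEN 10 AND 14 would actually touch:
# five runs of two-or-so pages each, out of 200 pages in the table.
pages = {i // ROWS_PER_PAGE for i, v in enumerate(heap) if 10 <= v <= 14}
print("pages touched: %d of %d" % (len(pages), len(heap) // ROWS_PER_PAGE))

The current correlation-based costing would treat that scan as nearly
worst-case random I/O.
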
I spent a while on the web looking for a known way to calculate
"local" correlation or "clumping" in a manner analogous to how we
compute correlation currently. So far I have only found highly
specialized examples that were tangentially relevant. We need a pet
statistician to ask.
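
For what it is worth, one hypothetical candidate, sketched over the
same kind of (value, heap position) sample that ANALYZE already
collects (the name and formula are my own invention, nothing that
exists in PostgreSQL):

# Order the sampled rows by value and measure how far apart consecutive
# rows land in the heap; small gaps mean the rows for nearby values are
# clumped onto few pages.
def clumping(sample, nrows):
    """sample: list of (value, heap_position) pairs; nrows: table size.
    Returns a number in [0, 1]: near 1 = clumped, near 0 = scattered."""
    by_value = sorted(sample)
    gaps = [abs(b[1] - a[1]) for a, b in zip(by_value, by_value[1:])]
    mean_gap = sum(gaps) / len(gaps)
    # Two independent uniform positions on [0, nrows] lie nrows/3 apart
    # on average, so that serves as the "fully scattered" baseline.
    return max(0.0, 1.0 - mean_gap / (nrows / 3))

On the interleaved-runs layout above this comes out well above zero
even though the global correlation is near zero, which is roughly
Ron's second statistic. No idea whether it is statistically
respectable, though; hence the pet statistician.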
regards
Mark