From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Claudio Freire <klaussfreire(at)gmail(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, Ogden <lists(at)darkstatic(dot)com>, Tomas Vondra <tv(at)fuzzy(dot)cz>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance
Date: 2011-04-13 21:52:36
Message-ID: 25734.1302731556@sss.pgh.pa.us
Lists: pgsql-performance

Claudio Freire <klaussfreire(at)gmail(dot)com> writes:
> On Wed, Apr 13, 2011 at 4:32 PM, Kevin Grittner
> <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
>> If you model the costing to reflect the reality on your server, good
>> plans will be chosen.
> Wouldn't it be "better" to derive those costs from actual performance
> data measured at runtime?
> Say, pg could measure random/seq page cost, *per tablespace* even.
> Has that been tried?
Getting numbers that mean much of anything is a slow, expensive
process. You really don't want the database trying to do that for you.
Once you've got them, you *really* don't want the database
editorializing on them.
regards, tom lane
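
[Archive note, not part of the original exchange: the per-tablespace knob Claudio asks about does exist in manual form. Since PostgreSQL 9.0, seq_page_cost and random_page_cost can be overridden per tablespace, so an administrator who has benchmarked the underlying storage can feed measured reality to the planner, which is the modeling Kevin recommends; what PostgreSQL does not do is measure those values at runtime itself. A minimal sketch, with a hypothetical tablespace name and illustrative rather than measured values:

    -- Override the planner's I/O cost assumptions for one tablespace.
    -- "fast_ssd" is a hypothetical name; the numbers are placeholders,
    -- to be replaced with values derived from benchmarking that storage.
    ALTER TABLESPACE fast_ssd
        SET (seq_page_cost = 1.0, random_page_cost = 1.1);

    -- Inspect the stored per-tablespace overrides.
    SELECT spcname, spcoptions FROM pg_tablespace;
]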