From: Peter van Hardenberg <pvh(at)pvh(dot)ca>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: random_page_cost = 2.0 on Heroku Postgres
Date: 2012-02-09 22:41:34
Message-ID: CAAcg=kXOiAi66tzP6hdFTSECjvKjLx65jUcMYQ1g3j4XdzOWDw@mail.gmail.com
Lists: pgsql-performance
Hmm, perhaps we could usefully aggregate auto_explain output.
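As a minimal sketch of what the capture side of that could look like: the snippet below turns on auto_explain for a single session so that every plan lands in the server log, which is the raw material any aggregation would work from. psycopg2 and the connection string are placeholders, and LOAD plus the auto_explain settings assume superuser privileges.

```python
# Sketch: enable auto_explain for one session so query plans are written to
# the PostgreSQL server log, where they could later be collected and
# aggregated across a fleet. Connection string is a placeholder.
import psycopg2

conn = psycopg2.connect("dbname=app")  # placeholder DSN
with conn, conn.cursor() as cur:
    cur.execute("LOAD 'auto_explain'")
    cur.execute("SET auto_explain.log_min_duration = 0")  # log every statement's plan
    cur.execute("SET auto_explain.log_analyze = on")      # include actual timings/row counts
    # Any query run from here on has its plan written to the server log.
    cur.execute("SELECT count(*) FROM pg_class")
    print(cur.fetchone()[0])
```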
On Thu, Feb 9, 2012 at 7:32 AM, Jeff Janes <jeff(dot)janes(at)gmail(dot)com> wrote:
> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <pvh(at)pvh(dot)ca> wrote:
>>> Having read the thread, I don't really see how I could study what a
>>> more principled value would be.
>>
>> Agreed. Just pointing out more research needs to be done.
>>
>>> That said, I have access to a very large fleet from which I can collect
>>> data, so I'm all ears for suggestions about how to measure and would
>>> gladly share the results with the list.
>>
>> I wonder if some kind of script that grabbed random queries and ran
>> them with EXPLAIN ANALYZE at various random_page_cost settings, to see
>> when the plans switch and which plans are faster, would work?
>
> But if you grab a random query and execute it repeatedly, you
> drastically change the caching.
>
> Results from any execution after the first are unlikely to be
> meaningful to the actual production situation.
>
> Cheers,
>
> Jeff
--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut