From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Peter van Hardenberg <pvh(at)pvh(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: random_page_cost = 2.0 on Heroku Postgres
Date: 2012-02-09 15:32:19
Message-ID: CAMkU=1xnK+HiW4cZS_qy0vMJeDXFgf9+XDuz0uVX+iNCukejqA@mail.gmail.com
Lists: pgsql-performance
On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <pvh(at)pvh(dot)ca> wrote:
>> Having read the thread, I don't really see how I could study what a
>> more principled value would be.
>
> Agreed. Just pointing out more research needs to be done.
>
>> That said, I have access to a very large fleet in which to can collect
>> data so I'm all ears for suggestions about how to measure and would
>> gladly share the results with the list.
>
> I wonder if some kind of script that grabbed random queries and ran
> them with explain analyze and various random_page_cost to see when
> they switched and which plans are faster would work?
But if you grab a random query and execute it repeatedly, you
drastically change the caching. Results from any execution after the
first one are unlikely to be meaningful to the actual production
situation.
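For reference, the kind of experiment Scott describes can be sketched in
a single psql session, since cost settings can be changed per session
(the table and column names here are purely illustrative); the caching
problem is that every run after the first hits a warmed cache:

```sql
-- Compare the planner's choice under two cost settings.
-- SET is session-local, so this does not affect other connections.
SET random_page_cost = 4.0;  -- the default
EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM some_table WHERE indexed_col = 42;

SET random_page_cost = 2.0;  -- the value Heroku is considering
EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM some_table WHERE indexed_col = 42;
```

The BUFFERS option shows shared-buffer hits versus reads, which at least
makes the cache warming visible: if the second run reports mostly hits,
its timing says little about cold-cache production behavior.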
Cheers,
Jeff