From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Cc: Peter van Hardenberg <pvh(at)pvh(dot)ca>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: random_page_cost = 2.0 on Heroku Postgres
Date: 2012-02-12 19:49:33
Message-ID: CAMkU=1zzx96FQssKixaBubYCWw6Msf1E-2C18HYPDuDfSBTqwA@mail.gmail.com
Lists: pgsql-performance
On Thu, Feb 9, 2012 at 5:29 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Thu, Feb 9, 2012 at 3:41 PM, Peter van Hardenberg <pvh(at)pvh(dot)ca> wrote:
>> Hmm, perhaps we could usefully aggregate auto_explain output.
>
> How about something where you run a site at random_page_cost of x,
> then y, then z and you do some aggregating of query times in each. A
> scatter plot should tell you lots.
Is there an easy and unobtrusive way to get such a metric as the
aggregated query times? And to normalize it for how much work
happened to be going on at the time?
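
pg_stat_statements might cover the first part, if the extension is
installed. A rough, untested sketch of what I have in mind (the
connection string and measurement interval below are just
placeholders):

import time
import psycopg2

DSN = "dbname=postgres"      # placeholder connection string
INTERVAL = 15 * 60           # measurement window, in seconds

conn = psycopg2.connect(DSN)
conn.autocommit = True
cur = conn.cursor()

cur.execute("SELECT pg_stat_statements_reset()")  # start from a clean slate
time.sleep(INTERVAL)                              # let the workload run

# total_time is reported in milliseconds
cur.execute("SELECT sum(total_time), sum(calls) FROM pg_stat_statements")
total_ms, calls = cur.fetchone()
print("aggregate: %.1f ms over %d calls (%.3f ms/call)"
      % (total_ms, calls, total_ms / calls))

Dividing by the call count is only a crude normalization, of course;
it assumes the query mix stays roughly constant across intervals.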
Without a good way to do normalization, you could just run lots of
tests with randomized settings to average out any patterns in the
workload, but that means you need an awful lot of tests to have
enough data to rely on the randomization. It would be desirable to
randomize anyway, though, in case the normalization isn't as
effective as we think.
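
A randomized version could be as simple as wrapping the same
measurement in a loop. Equally untested; the database name, candidate
costs, trial count, and interval are made up, and note that ALTER
DATABASE ... SET only affects sessions started after the change:

import random
import time
import psycopg2

DSN = "dbname=postgres"              # placeholder connection string
DBNAME = "postgres"                  # database to change the setting on
CANDIDATES = [1.5, 2.0, 3.0, 4.0]    # settings to sample from
INTERVAL = 15 * 60                   # seconds per trial

conn = psycopg2.connect(DSN)
conn.autocommit = True
cur = conn.cursor()

results = []
for trial in range(100):             # "an awful lot of tests"
    rpc = random.choice(CANDIDATES)
    # only sessions started after this pick up the new value
    cur.execute("ALTER DATABASE %s SET random_page_cost = %s" % (DBNAME, rpc))
    cur.execute("SELECT pg_stat_statements_reset()")
    time.sleep(INTERVAL)
    cur.execute("SELECT sum(total_time), sum(calls) FROM pg_stat_statements")
    total_ms, calls = cur.fetchone()
    results.append((rpc, total_ms, calls))
    print("random_page_cost=%.1f: %.1f ms over %s calls"
          % (rpc, total_ms, calls))

Scatter-plotting those results by setting afterwards would be the
easy part.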
But how long should each setting be tested for? If a different
setting causes certain indexes to start being used, then performance
would go down until those indexes get cached, and then increase from
there. But how long is long enough to allow this to happen?
Thanks,
Jeff