From: | Peter van Hardenberg <pvh(at)pvh(dot)ca> |
---|---|
To: | Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> |
Cc: | Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-performance(at)postgresql(dot)org |
Subject: | Re: random_page_cost = 2.0 on Heroku Postgres |
Date: | 2012-02-09 02:47:50 |
Message-ID: | CAAcg=kUDs62oSpkra1xc=T_GGL1prKXEg2Lwz5xZA9ej0KUj7A@mail.gmail.com |
Lists: | pgsql-performance |
On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg <pvh(at)pvh(dot)ca> wrote:
>> That said, I have access to a very large fleet from which we can collect
>> data, so I'm all ears for suggestions about how to measure, and I would
>> gladly share the results with the list.
>
> I wonder if some kind of script that grabbed random queries and ran
> them with explain analyze and various random_page_cost to see when
> they switched and which plans are faster would work?
We aren't exactly in a position where we can adjust random_page_cost
on our users' databases arbitrarily to see what breaks. That would
be... irresponsible of us.
How would one design a meta-analyzer that we could run across many
databases to collect data? Could we perhaps collect useful
information from pg_stat_user_indexes, for example?
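For instance — only a sketch of the kind of read-only query we could ship, not anything we run today — the per-index counters could be read alongside the per-table sequential-scan counters to see how often the planner is choosing index access at all, without touching any customer's settings:

```sql
-- Sketch only: per table, how much access is sequential vs. index,
-- and which indexes are actually getting used, from the standard
-- cumulative statistics views.
SELECT t.schemaname,
       t.relname,
       t.seq_scan,
       t.idx_scan        AS idx_scans_on_table,
       i.indexrelname,
       i.idx_scan        AS scans_of_this_index
FROM   pg_stat_user_tables  t
JOIN   pg_stat_user_indexes i USING (relid)
ORDER  BY t.seq_scan DESC, i.idx_scan DESC
LIMIT  50;
```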
-p
--
Peter van Hardenberg
San Francisco, California
"Everything was beautiful, and nothing hurt." -- Kurt Vonnegut