| From: | Josh Berkus <josh(at)agliodbs(dot)com> |
|---|---|
| To: | Peter van Hardenberg <pvh(at)pvh(dot)ca> |
| Cc: | Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: random_page_cost = 2.0 on Heroku Postgres |
| Date: | 2012-02-10 19:32:50 |
| Message-ID: | 4F3570E2.2070301@agliodbs.com |
| Lists: | pgsql-performance |
On 2/9/12 2:41 PM, Peter van Hardenberg wrote:
> Hmm, perhaps we could usefully aggregate auto_explain output.
The other option is to take a statistical approach. After all, what you
want to do is optimize average response times across all your users'
databases, not optimize for a few specific queries.
So one thought would be to add pg_stat_statements to your platform
... something I'd like to see Heroku do anyway. Then you could sample
its output across dozens (or hundreds) of user databases, each with
random_page_cost (RPC) set to a slightly different level, and aggregate
the results into a heat map.
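Concretely, the mechanics might look something like the sketch below.
This is only a sketch: the customer_db_* names are made up, it assumes
pg_stat_statements is already in shared_preload_libraries, and the view's
column names vary between versions (total_time and calls are the stable
ones), so treat it as illustrative rather than a recipe.

```sql
-- Rough sketch; customer_db_* names are hypothetical, and this assumes
-- pg_stat_statements is in shared_preload_libraries (needs a restart).
CREATE EXTENSION pg_stat_statements;

-- Pin each sampled database to a different random_page_cost:
ALTER DATABASE customer_db_1 SET random_page_cost = 1.5;
ALTER DATABASE customer_db_2 SET random_page_cost = 2.0;
ALTER DATABASE customer_db_3 SET random_page_cost = 3.0;

-- Periodically pull mean execution time per database (total_time is in
-- milliseconds); plotting RPC setting vs. mean time gives the heat map:
SELECT d.datname,
       sum(s.total_time) / sum(s.calls) AS mean_ms_per_call
FROM pg_stat_statements s
JOIN pg_database d ON d.oid = s.dbid
GROUP BY d.datname;
```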
That's the way I'd do it, anyway.
--
Josh Berkus
PostgreSQL Experts Inc.
http://pgexperts.com
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Cédric Villemain | 2012-02-10 19:48:07 | Re: random_page_cost = 2.0 on Heroku Postgres |
| Previous Message | Claudio Freire | 2012-02-10 17:12:11 | Re: Performance on large, append-only tables |