From: Peter Geoghegan <peter(at)2ndquadrant(dot)com>
To: Peter van Hardenberg <pvh(at)pvh(dot)ca>
Cc: Joshua Berkus <josh(at)agliodbs(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, pgsql-performance(at)postgresql(dot)org, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
Subject: Re: random_page_cost = 2.0 on Heroku Postgres
Date: 2012-02-12 23:37:14
Message-ID: CAEYLb_WvD6gyibab7w=tCF4dQ7qD5AQjxGF348gZJM+r=oNhJQ@mail.gmail.com
Lists: pgsql-performance
On 12 February 2012 22:28, Peter van Hardenberg <pvh(at)pvh(dot)ca> wrote:
> Yes, I think if we could normalize, anonymize, and randomly EXPLAIN
> ANALYZE 0.1% of all queries that run on our platform we could look for
> bad choices by the planner. I think the potential here could be quite
> remarkable.
Tom Lane suggested that plans, rather than the query tree, might be a
more appropriate thing for the new pg_stat_statements to hash, since
plans are what can be directly blamed for execution costs. While I
don't think that's appropriate for normalisation (there would often be
duplicate pg_stat_statements entries per query), it does seem like an
idea that could be worked into a future revision, to detect
problematic plans. Maybe it could be usefully combined with
auto_explain or something like that (in a revision of auto_explain
that doesn't necessarily explain every plan, and therefore doesn't pay
the considerable overhead of that instrumentation across the board).
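
For illustration only, a minimal postgresql.conf sketch of the kind of
selective plan capture being discussed. auto_explain.log_min_duration,
auto_explain.log_format and auto_explain.log_analyze are long-standing
auto_explain settings; auto_explain.sample_rate only appeared in later
PostgreSQL releases, so the sampling line is an assumption about how a
"only explain a small fraction of queries" mode could look, not
something available today:

    # Load the plan-logging and statement-statistics modules at server start.
    shared_preload_libraries = 'auto_explain, pg_stat_statements'

    # Only log plans for statements that run longer than 250ms, so the
    # instrumentation cost isn't paid across the board.
    auto_explain.log_min_duration = '250ms'

    # Emit plans in a machine-readable format for later aggregation.
    auto_explain.log_format = 'json'

    # Capturing actual row counts and timings (EXPLAIN ANALYZE output) is
    # the expensive part; enable it only if sampling keeps the cost down.
    auto_explain.log_analyze = on

    # Later-release knob (assumed here): only instrument a small fraction
    # of statements, in the spirit of "EXPLAIN ANALYZE 0.1% of queries".
    auto_explain.sample_rate = 0.001

The plans logged that way could then be matched back to
pg_stat_statements entries when hunting for bad planner choices, by
query text or, in later releases, by the query ID that
pg_stat_statements exposes.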
--
Peter Geoghegan http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services