From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Claudio Freire <klaussfreire(at)gmail(dot)com>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Shouldn't we have a way to avoid "risky" plans?
Date: 2011-03-24 00:05:12
Message-ID: 4D8A8AB8.7040401@agliodbs.com
Lists: pgsql-performance
> If the planner starts operating on the basis of worst case rather than
> expected-case performance, the complaints will be far more numerous than
> they are today.
Yeah, I don't think that's the way to go.  The other thought I had was
to accumulate a "risk" stat the same way we accumulate a "cost" stat.
However, I suspect I'm overengineering what seems to be a fairly
isolated problem; we might simply need to adjust the costing for this
kind of plan.
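
(To spell out the "risk stat" idea: what follows is purely a sketch,
with struct and function names I'm making up on the spot; nothing like
this exists in the planner today.  Each path would carry a risk
estimate that gets combined upward the same way total_cost does.)

    /* Hypothetical: a per-path risk figure alongside the cost. */
    typedef struct RiskyPath
    {
        double      total_cost;   /* existing expected-case estimate */
        double      risk;         /* made-up: worst-case vs. expected-case spread */
    } RiskyPath;

    /* A join would inherit the risk of its riskiest input, on the
     * theory that a plan is only as fragile as its most fragile node. */
    static double
    combine_risk(const RiskyPath *outer, const RiskyPath *inner)
    {
        return (outer->risk > inner->risk) ? outer->risk : inner->risk;
    }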
Also, can I say that the cost figures in this plan are extremely
confusing? Is it really necessary to show them the way we do?
Merge Join  (cost=29.16..1648.00 rows=382 width=78) (actual time=57215.167..57215.216 rows=1 loops=1)
  Merge Cond: (rn.node_id = device_nodes.node_id)
  ->  Nested Loop  (cost=0.00..11301882.40 rows=6998 width=62) (actual time=57209.291..57215.030 rows=112 loops=1)
        Join Filter: (node_ep.node_id = rn.node_id)
        ->  Nested Loop  (cost=0.00..11003966.85 rows=90276 width=46) (actual time=0.027..52792.422 rows=90195 loops=1)
The first time I saw the above, I thought we had some kind of glibc
math bug on the host system.  Costs are supposed to accumulate
upwards, yet the Merge Join's total (1648.00) is a tiny fraction of
its Nested Loop child's (11301882.40).
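
(FWIW, I don't think the arithmetic is actually broken: if I
understand the costing right, a merge join expects to stop as soon as
one input runs out of matching keys, so it only charges for the
fraction of each child it expects to read.  Back-of-envelope from the
numbers above, treating the parent's whole cost as a scaled child
cost, which is a simplification of what costsize.c actually does:)

    /* Illustrative C, not the real costsize.c logic. */
    double child_total_cost  = 11301882.40; /* the Nested Loop's estimate  */
    double parent_total_cost = 1648.00;     /* what the Merge Join displays */
    double implied_fraction  = parent_total_cost / child_total_cost;
    /* ~0.000146: the planner expected to consume roughly 0.015% of the
     * child's output before the merge ended. */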
--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com