From: Kevin Grittner <kgrittn(at)ymail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Carson, Leonard" <lcarson(at)sdsc(dot)edu>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: views much slower in 9.3 than 8.4
Date: 2015-03-30 14:46:04
Message-ID: 1760411806.1238053.1427726764629.JavaMail.yahoo@mail.yahoo.com
Lists: pgsql-performance
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> But the other problem is that the planner considers less-than-1%
> differences in cost estimates to be "in the noise", which means
> that it's not going to consider cost differences of less than
> 1480 units in the remaining join steps to be significant. This
> is how come we end up with the apparently brain-dead decisions to
> use seqscans on some of the other tables such as "pi" and "ac":
> comparing the seqscan to a potential inner indexscan, the total
> cost of the join is "the same" according to the 1% rule,
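The arithmetic behind that fuzz factor can be checked directly (the plan costs here are hypothetical, chosen to match the ~1480-unit threshold Tom mentions; the point is that a path counts as cheaper only when it beats the other by more than the 1% margin):

```sql
-- With total costs near 148000, the 1% fuzz factor means a difference
-- of less than ~1480 cost units is "in the noise".
SELECT 147000.0 < 148000.0 / 1.01 AS clearly_cheaper,   -- false: within 1%
       140000.0 < 148000.0 / 1.01 AS clearly_cheaper_2; -- true: beats the margin
```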
The 1% rule itself might be something to add to the R&D list. I
have seen it cause big problems in production, although the users
in that case had made a mistake which significantly contributed to
it being an issue. They had used the enable_seqscan = off setting
for one query which they had been unable to wrestle into good
performance in other ways, but accidentally neglected to turn it
back on after that query. Now, seqscans were rarely a good idea
with their permanent tables, but they had a couple queries which
used very small temporary tables with no indexes. With no index
available, the planner chose the seqscan in spite of the setting;
but with seqscans disabled, every candidate plan carried such a
high cost penalty that they all looked "equal" under the 1% rule,
and the tie-breaker logic picked a horrible one. (The
faster plans did have lower cost, but not by enough to exceed the
1% threshold.) Granted, had they not made the questionable choice
of disabling seqscans in production, compounded by the error of
not turning them back on afterward, their main web application
would not have slowed to unusable levels at times -- but it seems to me
that it might be reasonable to have some absolute cost maximum
difference test that needs to be met in addition to the percentage
difference, as kind of a "safety" on this foot-gun.
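As a sketch of the pattern that went wrong above (the table and query here are hypothetical; the point is the scoping of the override), along with the transaction-scoped alternative that avoids the foot-gun:

```sql
-- What happened: the override leaked past the one query it was meant for.
SET enable_seqscan = off;
SELECT count(*) FROM problem_table;  -- hypothetical query that resisted tuning
-- RESET enable_seqscan;             -- the step that was accidentally omitted

-- A safer pattern: SET LOCAL reverts automatically at transaction end.
BEGIN;
SET LOCAL enable_seqscan = off;
SELECT count(*) FROM problem_table;
COMMIT;
```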
I'm not sold on this as being a good idea, and had not been
planning on raising it without further research; but since it plays
into this other scenario it seems worth mentioning as material for
potential R&D.
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company