From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>, Teodor Sigaev <teodor(at)sigaev(dot)ru>
Cc: pgsql-hackers(at)postgreSQL(dot)org, "Marc Mamin" <M(dot)Mamin(at)intershop(dot)de>
Subject: Qual evaluation cost estimates for GIN indexes
Date: 2012-02-16 23:15:25
Message-ID: 27953.1329434125@sss.pgh.pa.us
Lists: pgsql-hackers

I looked into the complaint here of poor estimation for GIN indexscans:
http://archives.postgresql.org/pgsql-performance/2012-02/msg00028.php
At first glance it sounds like a mistake in selectivity estimation,
but it isn't: the rowcount estimates are pretty nearly dead on.
The problem is in the planner's estimate of the cost of executing the
@@ operator. We have pg_proc.procost set to 1 for ts_match_vq, but
actually it's a good deal more expensive than that. Some
experimentation suggests that @@ might be about 500 times as expensive
as a simple integer comparison. I don't propose pushing its procost
up that much, but surely at least 10 would be appropriate, maybe even
100.
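
For anyone wanting to experiment, the supported way to do that (rather
than poking pg_proc directly) is ALTER FUNCTION ... COST.  A sketch,
with 100 standing in for whatever value we settle on:

    -- what the planner currently charges for the function behind
    -- tsvector @@ tsquery (procost is in units of cpu_operator_cost)
    SELECT p.proname, p.procost
    FROM pg_operator o
    JOIN pg_proc p ON p.oid = o.oprcode
    WHERE o.oprname = '@@'
      AND o.oprleft = 'tsvector'::regtype;

    -- raise it
    ALTER FUNCTION ts_match_vq(tsvector, tsquery) COST 100;
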
However ... if you just alter pg_proc.procost in Marc's example, the
planner *still* picks a seqscan, even though its estimate of the seqscan
cost surely does go up. The reason is that its estimate of the GIN
indexscan cost goes up just as much, since we charge one qual eval cost
per returned tuple in gincostestimate. It is easy to tell from the
actual runtimes that that is not what's happening in a GIN indexscan;
we are not re-executing the @@ operator for every tuple. But the
planner's cost model doesn't know that.
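
(To make that concrete, here is a stripped-down sketch -- the schema is
invented for illustration, not taken from Marc's report.  Even after the
ALTER FUNCTION above, EXPLAIN keeps choosing the seqscan, because the
higher qual cost inflates the GIN path's estimate right along with the
seqscan's:)

    CREATE TABLE docs (id serial PRIMARY KEY, tsv tsvector);
    CREATE INDEX docs_tsv_idx ON docs USING gin (tsv);
    -- ... load a realistic amount of data, then ANALYZE ...
    EXPLAIN SELECT * FROM docs WHERE tsv @@ to_tsquery('foo');
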
There are a couple of issues that would have to be addressed to make
this better:

1. If we shouldn't charge procost per row, what should we charge?
It's probably reasonable to assume that the primitive GIN-index-entry
comparison operations have cost equal to one cpu_operator_cost
regardless of what's assigned to the user-visible operators, but I'm
not convinced that that's sufficient to model complicated operators.
It might be okay to charge that much per index entry visited rather
than driving it off the number of heap tuples returned. The code in
gincostestimate goes to considerable lengths to estimate the number of
index pages fetched, and it seems like it might be able to derive the
number of index entries visited too, but it's not trying to account for
any CPU costs at the moment.  (Some back-of-the-envelope numbers for
this appear below, after point #2.)

2. What about lossy operators, or lossy bitmap scans? If either of
those things happen, we *will* re-execute the @@ operator at visited
tuples, so discounting its high cost would be a mistake. But the
planner has no information about either effect, since we moved all
support for lossiness to runtime.  (A quick way to see this in action
also follows below.)
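
To put rough, purely illustrative numbers on point #1: with the default
cpu_operator_cost of 0.0025 and procost raised to 100, a scan expected
to return 50,000 tuples gets 50,000 * 100 * 0.0025 = 12,500 added to
the GIN path's cost under the current per-tuple charging.  Charging one
cpu_operator_cost per index entry visited instead -- say the scan
touches 2,000 entries -- would add only 2,000 * 0.0025 = 5.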
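
And point #2 is easy to demonstrate by hand, for what it's worth (again
a sketch, reusing the toy table above):

    SET work_mem = '64kB';  -- small enough that a big bitmap goes lossy
    EXPLAIN ANALYZE SELECT * FROM docs WHERE tsv @@ to_tsquery('foo');
    -- once the bitmap is lossy, the executor re-runs tsv @@ ... at every
    -- tuple on the visited pages, so the operator's full cost really is
    -- paid per tuple in that case
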
I think it was point #2 that led us to not consider these issues before.
But now that we've seen actual cases where the planner makes a poor
decision because it's not modeling this effect, I think we ought to try
to do something about it.

I haven't got time to do anything about this for 9.2, and I bet you
don't either, but it ought to be on the TODO list to try to improve
this.

BTW, an entirely different line of thought is "why on earth is @@ so
frickin expensive, when it's comparing already-processed tsvectors
with only a few entries to an already-processed tsquery with only one
entry??". This test case suggests to me that there's something
unnecessarily slow in there, and a bit of micro-optimization effort
might be well repaid.
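
(A crude way to reproduce that kind of measurement -- a sketch invented
here, not the experiment behind the 500x figure above -- is to time @@
against a simple integer comparison over the same preprocessed rows,
in psql:)

    -- one already-processed tsvector per row, so @@ runs once per row
    CREATE TEMP TABLE bench AS
      SELECT 'a:1 b:2 c:3'::tsvector AS v, i
      FROM generate_series(1, 1000000) AS i;

    \timing on
    SELECT count(*) FROM bench WHERE v @@ 'a'::tsquery;  -- tsquery match
    SELECT count(*) FROM bench WHERE i = 0;              -- integer compare
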
regards, tom lane