From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: k(dot)joe(at)freemail(dot)hu
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: pg 9.3 exists subselect with limit brakes query plan
Date: 2014-03-18 01:18:28
Message-ID: 14244.1395105508@sss.pgh.pa.us
Lists: pgsql-bugs
Kószó József <k(dot)joe(at)freemail(dot)hu> writes:
> We plan to upgrade from PostgreSQL 8.3 to 9.3 and experience some
> planner-related problems in our applications with subselects using
> EXISTS and LIMIT keywords.
I'd drop the LIMIT clauses if I were you. There once were PG versions
that were too dumb to know that an EXISTS only fetches one row, but that
was a long time ago. It's unlikely that LIMIT will make things better
except by accident.
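For illustration, the two query shapes in question look roughly like this (a sketch using the pgbench tables from the test case below; the outer table is assumed to be pgbench_branches):

```sql
-- EXISTS already stops after the first matching row:
SELECT b.bid
FROM pgbench_branches b
WHERE EXISTS (SELECT 1 FROM pgbench_accounts a
              WHERE a.bid = b.bid);

-- The LIMIT 1 here buys nothing on any modern PG version,
-- and can steer the planner toward fragile plan choices:
SELECT b.bid
FROM pgbench_branches b
WHERE EXISTS (SELECT 1 FROM pgbench_accounts a
              WHERE a.bid = b.bid
              LIMIT 1);
```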
The immediate issue here seems to be a variant of the LIMIT-is-hard-to-
predict theme. On my machine, the subplan for the exists looks like
  ->  Limit  (cost=0.00..0.29 rows=1 width=4) (actual time=63.037..63.037 rows=1 loops=10)
        ->  Seq Scan on pgbench_accounts a  (cost=0.00..28894.00 rows=100000 width=4) (actual time=63.034..63.034 rows=1 loops=10)
              Filter: (bid = b.bid)
              Rows Removed by Filter: 360115
while if I set enable_seqscan = off I get
  ->  Limit  (cost=0.42..0.45 rows=1 width=4) (actual time=0.027..0.027 rows=1 loops=10)
        ->  Index Only Scan using pgbench_accounts_bid on pgbench_accounts a  (cost=0.42..2850.43 rows=100000 width=4) (actual time=0.026..0.026 rows=1 loops=10)
              Index Cond: (bid = b.bid)
              Heap Fetches: 0
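The two plans can be compared side by side in a psql session (a sketch, assuming a pgbench-initialized database and a query of the shape discussed above):

```sql
-- Default settings: the planner may pick the seqscan+limit subplan.
EXPLAIN ANALYZE
SELECT b.bid FROM pgbench_branches b
WHERE EXISTS (SELECT 1 FROM pgbench_accounts a WHERE a.bid = b.bid);

-- Temporarily forbid seqscans to see the index-only alternative.
SET enable_seqscan = off;
EXPLAIN ANALYZE
SELECT b.bid FROM pgbench_branches b
WHERE EXISTS (SELECT 1 FROM pgbench_accounts a WHERE a.bid = b.bid);

RESET enable_seqscan;  -- don't leave this off in production
```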
So the seqscan+limit has a marginally lower predicted cost and gets
chosen. But that predicted cost assumes that the bid values are uniformly
distributed in the table (implying that the seqscan only has to visit
maybe half a dozen rows to find a match). In this rather artificial test
case, they're exactly sequential, so that for larger bid values, a lot of
rows have to be traversed.
There's been talk of incorporating some kind of risk assessment in plan
costing, which might be able to identify this type of problem and avoid
the unstable plan. But we don't have it yet.
Another solution you might consider for this particular problem is to
reduce random_page_cost a bit to make the indexscan look cheaper. But I
don't know if that would help for whatever your production problem is.
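Such a change can be tried per-session before putting it in postgresql.conf (the value 1.1 below is just illustrative; the default is 4.0):

```sql
SET random_page_cost = 1.1;  -- session-local; makes index access look cheaper
EXPLAIN ANALYZE
SELECT b.bid FROM pgbench_branches b
WHERE EXISTS (SELECT 1 FROM pgbench_accounts a WHERE a.bid = b.bid);
RESET random_page_cost;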
regards, tom lane