From: Maxim Boguk <maxim(dot)boguk(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Question about (probably wrong) index scan cost for conditional indexes
Date: 2012-01-23 00:46:18
Message-ID: CAK-MWwTg+bypadBTG-HNbR3hcDHC8sTCNZyV1cU19MD96as=nA@mail.gmail.com
Lists: pgsql-general
On Mon, Jan 23, 2012 at 11:28 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Maxim Boguk <maxim(dot)boguk(at)gmail(dot)com> writes:
> > But it seems that index scan cost for very narrow/selective conditional
> > indexes is greatly overestimated at least in some cases.
>
> I realized in connection with
> http://archives.postgresql.org/pgsql-general/2012-01/msg00459.php
> that btcostestimate is not correctly estimating numIndexTuples for
> partial indexes. But it's impossible to tell from this amount of
> information whether you're seeing an effect of that, or something else.
> Can you provide a self-contained test case?
>
> regards, tom lane
>
Probably the simplest test case:
set random_page_cost to 4;
set seq_page_cost to 1;
drop table if exists test;
CREATE TABLE test (id integer primary key, value1 float, value2 float, value3 float, value4 float);
INSERT into test select id, random() as value1, random() as value2, random() as value3, random() as value4 from generate_series(1,1000000) as g(id);
CREATE INDEX test_special_key on test(value1) where value2*2<0.01 and value3*2<0.01 and value4*2<0.01;
ANALYZE test;
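For scale: each predicate such as value2*2<0.01 keeps about 0.5% of rows, so all three together should keep roughly 1000000 * 0.005^3 ~ 0.1 rows, i.e. the partial index is expected to be empty or nearly so. (The rows=37037 estimate in the second plan below is presumably 1000000 * (1/3)^3, the planner's default inequality selectivity applied per clause, not the index contents.) A quick sanity check, output not captured here:

-- expected to return 0 or close to it: three ~0.5%-selective predicates
-- combined match roughly 1e6 * 0.005^3 ~ 0.13 rows on average
SELECT count(*) FROM test
WHERE value2*2<0.01 and value3*2<0.01 and value4*2<0.01;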
postgres=# EXPLAIN ANALYZE select * from test order by id limit 100;
                                                             QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..3.43 rows=100 width=36) (actual time=0.042..0.170 rows=100 loops=1)
   ->  Index Scan using test_pkey on test  (cost=0.00..34317.36 rows=1000000 width=36) (actual time=0.040..0.108 rows=100 loops=1)
 Total runtime: 0.243 ms
(3 rows)
vs
postgres=# EXPLAIN ANALYZE select * from test where value2*2<0.01 and value3*2<0.01 and value4*2<0.01 order by value1 limit 100;
                                                              QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..92.52 rows=100 width=36) (actual time=0.072..0.072 rows=0 loops=1)
   ->  Index Scan using test_special_key on test  (cost=0.00..34264.97 rows=37037 width=36) (actual time=0.070..0.070 rows=0 loops=1)
 Total runtime: 0.113 ms
(3 rows)
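Note that ANALYZE does record how small the partial index really is; it is only the scan cost that comes out almost the same as for the full million-row pkey index (34264.97 vs 34317.36), which fits your observation about numIndexTuples. The stored stats can be checked with (output not captured here):

SELECT relname, reltuples, relpages
FROM pg_class
WHERE relname IN ('test_pkey', 'test_special_key');
-- test_special_key should show reltuples at or near 0 and only a page or
-- two, while test_pkey covers all 1000000 rows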
Cost difference:
(cost=0.00..3.43 rows=100 width=36)
vs
(cost=0.00..92.52 rows=100 width=36)
while the actual speed (and the theoretical performance) is almost the same in both cases. And the more selective the conditions on the conditional index are, the worse the cost estimation gets.
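For example (a sketch along the same lines, not run here): with a single predicate the planner's default 1/3 selectivity applies only once instead of being cubed, so the rows estimate, and with it the cost, should be off by a much smaller factor:

-- one-condition partial index: the index holds ~5000 actual rows and the
-- planner should estimate ~333333, a far smaller error than the
-- 37037 estimated vs ~0 actual rows seen above
DROP INDEX test_special_key;
CREATE INDEX test_special_key on test(value1) where value2*2<0.01;
ANALYZE test;
EXPLAIN ANALYZE select * from test where value2*2<0.01 order by value1 limit 100;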
Kind Regards,
Maksym
--
Maxim Boguk
Senior Postgresql DBA.
Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678
Skype: maxim.boguk
Jabber: maxim(dot)boguk(at)gmail(dot)com
LinkedIn profile: http://nz.linkedin.com/in/maximboguk
"If they can send one man to the moon... why can't they send them all?"
МойКруг: http://mboguk.moikrug.ru/
"People problems are solved with people.
If people cannot solve the problem, try technology.
People will then wish they'd listened at the first stage."