From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Maksym Boguk <maxim(dot)boguk(at)gmail(dot)com>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #6278: Index scans on '>' condition on field with many NULLS
Date: 2011-10-31 13:48:26
Message-ID: CA+TgmoaF9uU0BGgLsje8Mz0AdgJ1=1Znyqbq3xewvz-S0dEHYg@mail.gmail.com
Lists: pgsql-bugs
On Sun, Oct 30, 2011 at 11:39 PM, Maksym Boguk <maxim(dot)boguk(at)gmail(dot)com> wrote:
> However, a very selective index scan on a '>' condition can be quite
> inefficient on a column with many NULLs
> (while '<' works well at the same time).
>
> It seems an index scan on a '>' condition walks through all the NULLs in the index.
>
> Test case (tested on 8.4 and 9.0 with same effect):
>
> postgres=# CREATE table test as select (case when random()>0.1 then NULL
> else random() end) as value from generate_series(1,10000000);
> SELECT 10000000
> postgres=# CREATE INDEX test_value_key on test(value);
> CREATE INDEX
> postgres=# SELECT count(*) from test;
> count
> ----------
> 10000000
> (1 row)
>
> postgres=# VACUUM ANALYZE test;
> VACUUM
>
> postgres=# EXPLAIN ANALYZE select * from test where value>0.9999;
> QUERY PLAN
> ---------------------------------------------------------------------------------------------------------------------------
>  Index Scan using test_value_key on test  (cost=0.00..13.78 rows=105 width=8) (actual time=0.010..155.318 rows=91 loops=1)
> Index Cond: (value > 0.9999::double precision)
> Total runtime: 155.346 ms
> (3 rows)
>
> Oops... 160ms to return 90 rows from memory.
>
> Meanwhile, ~100 rows from the other end of the index:
>
> postgres=# EXPLAIN ANALYZE select * from test where value<0.0001;
> QUERY PLAN
> --------------------------------------------------------------------------------------------------------------------------
>  Index Scan using test_value_key on test  (cost=0.00..15.69 rows=120 width=8) (actual time=0.006..0.158 rows=103 loops=1)
> Index Cond: (value < 0.0001::double precision)
> Total runtime: 0.175 ms
> (3 rows)
>
> That is a good result (about 1000x faster than the other direction).
>
> Sure, that can be worked around by creating a partial index with a NOT NULL
> predicate (see the sketch after the quoted text), but the problem may be
> worth a small investigation.
>
> It seems the index scan cannot stop after finding the first NULL during a
> scan on a '>' condition, and instead scans through all of the ~90% NULLs in the table.
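The partial-index workaround mentioned in the report would look roughly like
this (a sketch against the reporter's test table; the index name is made up,
and older releases may need the redundant IS NOT NULL clause in the query
before the planner will match the partial index):

-- Partial index that simply omits the ~90% NULL entries, so a '>' scan
-- never has to step over them.
CREATE INDEX test_value_notnull_key ON test (value) WHERE value IS NOT NULL;

-- The extra IS NOT NULL makes the query provably match the index predicate.
EXPLAIN ANALYZE
SELECT * FROM test WHERE value > 0.9999 AND value IS NOT NULL;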
I can reproduce this. I'm not sure whether it's a bug either, but it
sure seems less than ideal. I suppose the problem is that we are
generating an index scan that starts at 0.9999 and runs through the
end of the index, rather than stopping when it hits the first NULL.
Not sure how much work it would be to make that happen, but I guess
we'd need a second branch to the index condition to stop the scan,
just as we already do for:
EXPLAIN (ANALYZE, BUFFERS) select * from test where value > 0.9993 and value < 0.9999;
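A manual workaround on the test table above (not the planner change described
here) is to supply the stop key explicitly. This is only a sketch: it assumes
the default btree ordering, where NULLs sort after all non-NULL values, and
relies on random() producing values strictly below 1.0:

-- The explicit upper bound ends the scan before it reaches the NULL
-- entries stored at the high end of the index.
EXPLAIN ANALYZE SELECT * FROM test WHERE value > 0.9999 AND value < 1.0;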
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company