From: Игорь Выскорко <vyskorko(dot)igor(at)yandex(dot)ru>
To: Andrei Zhidenkov <andrei(dot)zhidenkov(at)n26(dot)com>
Cc: Igor Neyman <ineyman(at)perceptron(dot)com>, "zzzzz(dot)graf(at)gmail(dot)com" <zzzzz(dot)graf(at)gmail(dot)com>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Weird seqscan node plan
Date: 2019-11-27 10:32:51
Message-ID: 41127571574850771@sas2-48c24a2076e0.qloud-c.yandex.net
Lists: pgsql-general
27.11.2019, 15:42, "Andrei Zhidenkov" <andrei(dot)zhidenkov(at)n26(dot)com>:
> At this point I disagree. It’s faster to fetch one row using a seq scan than using an index scan, just as fetching a number of consecutive rows is faster via seq scan. An index scan is not always faster.
>
Yes, you are right in general: an index scan is not always faster.
But in my current case I have a table with ~8k tuples (309 relpages), and to find 1 row in the worst case (when the row is in the last page) we need 309 heap page fetches.
For the same table, finding one unique tuple via the index needs about 4 index page fetches (roughly the B-tree depth; not sure about the exact number) plus 1 heap fetch. That's why I decided that an index scan is faster here.
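A quick way to verify such page-fetch counts is EXPLAIN with the BUFFERS option; the table and column names below are made up for illustration:

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT * FROM my_table WHERE id = 42;

The "Buffers: shared hit=N read=M" lines in the output show how many pages each plan node actually touched.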
While thinking about your point, I looked at pg_class to determine relpages for both the index and the table. The index had more than 700 pages... Index bloat?
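For reference, a minimal sketch of that check, assuming the table is called my_table and its index my_table_pkey:

  -- relpages/reltuples are maintained by VACUUM and ANALYZE, so they may be stale
  SELECT relname, relkind, relpages, reltuples
  FROM pg_class
  WHERE relname IN ('my_table', 'my_table_pkey');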
So I dropped and recreated the index. Now it takes only 33 pages, and yes, my plan now uses an index-only scan.
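REINDEX would achieve the same result without a separate drop/create step (index name hypothetical, as above); note that a plain REINDEX blocks writes to the table while it runs:

  REINDEX INDEX my_table_pkey;
  -- On PostgreSQL 12 or later, REINDEX INDEX CONCURRENTLY avoids blocking writes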
OK, last question here: even with the bloated index, the overall number of index page fetches should still have been much lower than 309. Am I wrong?