From: Peter Geoghegan <pg(at)bowt(dot)ie>
To: James Coleman <jtc331(at)gmail(dot)com>
Cc: PostgreSQL mailing lists <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: Less selective index chosen unexpectedly
Date: 2021-05-18 21:16:27
Message-ID: CAH2-WznFSeXr34BFgUmfnr-Exv5EW8R6fiSGnfNjzcq6pw6eXw@mail.gmail.com
Lists: pgsql-bugs
On Tue, May 18, 2021 at 1:36 PM James Coleman <jtc331(at)gmail(dot)com> wrote:
> The attached test case demonstrates the full problem at least some of the
> time--I've not been able to make it deterministic, but I'd say it shows the
> wrong plan choice (resulting in ~5k unnecessarily processed and filtered
> rows) roughly 2/3 of the time on my laptop against PG11 (what we're running
> in production). Against ~master I was able to reproduce the wrong plan
> choice, but not the large number of filtered rows, until I modified the
> second INSERT's CASE statement to use "n.i > 5000" instead of "n.i < 25000"
> -- I assume this is due to some combination of index deduplication/suffix
> truncation. Interestingly, with those changes it seems to be more
> deterministic against ~master than the original repro case is against 11.
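(The attached test case itself isn't reproduced here. Purely as an
illustration of the shape being described -- the table, column names,
thresholds, and query below are hypothetical, not the actual attachment --
it is something along these lines:)

-- Hypothetical sketch only (not the attached file): a narrow index and a
-- wider, more selective index over the same leading column, with skew
-- introduced through a CASE expression in the second INSERT.
CREATE TABLE t (a int, b int);
CREATE INDEX t_a_idx ON t (a);        -- less selective index
CREATE INDEX t_a_b_idx ON t (a, b);   -- more selective index for the query below

INSERT INTO t SELECT 1, n.i FROM generate_series(1, 50000) AS n(i);
INSERT INTO t
SELECT 2, CASE WHEN n.i < 25000 THEN 0 ELSE n.i END   -- the threshold varied above
FROM generate_series(1, 50000) AS n(i);

ANALYZE t;
EXPLAIN (ANALYZE, BUFFERS) SELECT * FROM t WHERE a = 2 AND b = 0;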
That's expected with 12: the old "getting tired" behavior for dealing with
pages full of duplicates in !heapkeyspace indexes added non-determinism to
every index tuple insertion whose incoming item already has many duplicates
in the index. I actually relied on that determinism when I wrote my own
index space utilization test suite for the Postgres 12 and Postgres 13
work. With a little care you can expect to get exactly the same index from
a test case involving serial inserts.
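(As an illustration only -- this is not the actual test suite -- the kind of
check meant here might look like:)

-- Illustrative only: with strictly serial inserts in a single session, the
-- resulting btree should come out identical across runs on 12+, so simple
-- structural measurements can be compared directly.
CREATE TABLE dedup_test (dup int);
CREATE INDEX dedup_test_idx ON dedup_test (dup);
INSERT INTO dedup_test SELECT i % 10 FROM generate_series(1, 100000) AS i;
SELECT pg_relation_size('dedup_test_idx');  -- expect the same value on every run
-- With the pageinspect extension installed, bt_metap('dedup_test_idx') output
-- can be compared across runs as well.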
--
Peter Geoghegan