From: Masahiro Ikeda <ikedamsh(at)oss(dot)nttdata(dot)com>
To: Peter Geoghegan <pg(at)bowt(dot)ie>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, Masahiro(dot)Ikeda(at)nttdata(dot)com, pgsql-hackers(at)lists(dot)postgresql(dot)org, Masao(dot)Fujii(at)nttdata(dot)com
Subject: Re: Adding skip scan (including MDAM style range skip scan) to nbtree
Date: 2024-11-25 09:07:39
Message-ID: a1f3eb827673420516776b5e43091f42@oss.nttdata.com
Lists: pgsql-hackers
On 2024-11-23 07:34, Peter Geoghegan wrote:
> On Fri, Nov 22, 2024 at 4:17 AM Masahiro Ikeda
> <ikedamsh(at)oss(dot)nttdata(dot)com> wrote:
>> Though the change fixes the assertion error in 'make check', there are
>> still cases where the number of return values is incorrect. I would
>> also like to continue investigating.
>
> Thanks for taking a look at that! I've come up with a better approach,
> though (sorry about how quickly this keeps changing!). See the
> attached new revision, v17.
>
> I'm now calling the new optimization from the third patch the
> "skipskip" optimization. I believe that v17 will fix those bugs that
> you saw -- let me know if those are gone now. It also greatly improves
> the situation with regressions (more on that later).
>
> New approach
> ------------
>
> Before now, I was trying to preserve the usual invariants that we have
> for the scan's array keys: the array keys must "track the progress" of
> the scan through the index's key space -- that's what those
> _bt_tuple_before_array_skeys precondition and postcondition asserts
> inside _bt_advance_array_keys verify for us (these assertions have
> proven very valuable, both now and during the earlier Postgres 17
> nbtree SAOP project). My original approach (to managing the overhead
> of maintaining skip arrays in adversarial/unsympathetic cases) was
> overly complicated, inflexible, and brittle.
>
> It's simpler (much simpler) to allow the scan to temporarily forget
> about the invariants: once the "skipskip" optimization is activated,
> we don't care about the rules that require that "the array keys track
> progress through the key space" -- not until _bt_readpage actually
> reaches the page's finaltup. Then _bt_readpage can handle the problem
> using a trick that we already use elsewhere (e.g., in btrestrpos):
> _bt_readpage just calls _bt_start_array_keys to get the array keys to
> their initial positions (in the current scan direction), before
> calling _bt_checkkeys for finaltup in the usual way.
>
> Under this scheme, the _bt_checkkeys call for finaltup will definitely
> call _bt_advance_array_keys to advance the array keys to the correct
> place (the correct place when scanning forward is ">= finaltup in the
> key space"). The truly essential thing is that we never accidentally
> allow the array keys to "prematurely get ahead of the scan's tuple in
> the keyspace" -- that leads to wrong answers to queries once we reach
> the next page. But the arrays can be allowed to temporarily remain
> *before* the scan's tuples without consequence (it's safe when the
> skipskip optimization is in effect, at least, since the _bt_checkkeys
> calls treat everything as a non-required key, and so
> _bt_checkkeys/_bt_advance_array_keys will never expect any non-skipped
> SAOP arrays to tell them anything about the scan's progress through
> the index's key space -- there'll be no unsafe "cur_elem_trig" binary
> searches, for example).
>
> This approach also allowed me to restore all of the assertions in
> nbtutils.c to their original form. That was important to me -- those
> assertions have saved me quite a few times.
>
> Regressions, performance improvements
> -------------------------------------
>
> As I touched on briefly already, the new approach is also
> significantly *faster* than the master branch in certain "adversarial"
> cases (this is explained in the next paragraph). Overall, all of the
> cases that were unacceptably regressed before now, that I know of
> (e.g., all of the cases that you brought to my attention so far,
> Masahiro) are either significantly faster, or only very slightly
> slower. The regressions that we still have in v17 are probably
> acceptable -- though clearly I still have a lot of performance
> validation work to do before reaching a final conclusion about
> regressions.
>
> I also attach a variant of your test_for_v15.sql test case, Masahiro.
> I used this to do some basic performance validation of this latest
> version of the patch -- it'll give you a general sense of where we are
> with regressions, and will also show those "adversarial" cases that
> end up significantly faster than the master branch with v17.
> Significant improvements are sometimes seen because the "skipskip"
> optimization replaces the requiredOppositeDirOnly optimization (see
> commit e0b1ee17dc for context). We can do better than the existing
> requiredOppositeDirOnly optimizations because we can skip more
> individual scan key comparisons. For example, with this query:
>
> SELECT * FROM t WHERE id1 BETWEEN 0 AND 1_000_000 AND id2 = 1_000_000
>
> This goes from taking ~25ms with a warm cache on master, to only
> taking ~17ms on v17 of the patch series. I wasn't really trying to
> make anything faster, here -- it's a nice bonus.
>
> There are 3 scan keys involved here, when the query is run on the
> master branch: "id1 >= 0", "id1 <= 1_000_000", and "id2 = 1_000_000".
> The existing requiredOppositeDirOnly optimization doesn't work because
> only one page will ever have its pstate->first set to 'true' within
> _bt_readpage -- that's due to "id2 = 1_000_000" (a non-required scan
> key) only rarely evaluating to true. Whereas with skip scan (and its
> "skipskip" optimization), there are only 2 scan keys (well, sort of):
> the range skip array scan key on "id1", plus the "id2 = 1_000_000"
> scan key. We'll be able to skip over the range skip array scan key
> entirely with the "skipskip" optimization, so the range skip array's
> lower_bound and upper_bound "subsidiary" scan keys won't need to be
> evaluated more than once per affected leaf page. In other words, we'll
> still need to evaluate the "id2 = 1_000_000" scan key against every tuple on
> every leaf page -- but we don't need to use the >= or <=
> subsidiary-to-skip-array scan keys (except once per page, to establish
> that the optimization is safe).

Thanks for updating the patch!

The approach looks good to me. In fact, the significant regressions I
reported have disappeared in my environment. And test_for_v17.sql shows
that the performance of master and of master with your patches is
almost the same.
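
To double-check my reading of the new scheme, I wrote the toy model
below for myself. It is not the patch's actual code: the ToySkipArray
struct and the toy_* helpers are made up, and it only mimics the shape
of the per-page flow as I understand it -- leave the array keys alone
while skipskip is active, then reset them (like _bt_start_array_keys)
so that the finaltup _bt_checkkeys call can advance them to
">= finaltup":

#include <stdio.h>

/* made-up stand-in for one skip "array" over a single int attribute */
typedef struct
{
    int     cur;    /* current array position */
    int     lo;     /* initial position for a forward scan */
} ToySkipArray;

/* like _bt_start_array_keys: reset to the initial position */
static void
toy_start_array_keys(ToySkipArray *a)
{
    a->cur = a->lo;
}

/*
 * like the finaltup _bt_checkkeys/_bt_advance_array_keys call:
 * advance so that the array is ">= finaltup in the key space"
 */
static void
toy_advance_array_keys(ToySkipArray *a, int finaltup)
{
    if (a->cur < finaltup)
        a->cur = finaltup;  /* a skip array can take on any value */
}

int
main(void)
{
    ToySkipArray a = {.cur = 42, .lo = 0};
    int     page[] = {100, 105, 105, 110};  /* tuples on one leaf page */
    int     ntuples = 4;
    int     finaltup = page[ntuples - 1];

    /*
     * skipskip active: every key is treated as non-required, and the
     * (stale) array keys are left alone -- they may lag "before" the
     * tuples, which is the safe direction.
     */
    for (int i = 0; i < ntuples; i++)
        printf("tuple %d: checked, arrays untouched (cur=%d)\n",
               page[i], a.cur);

    /* at finaltup: forget the stale state, then catch up in one step */
    toy_start_array_keys(&a);
    toy_advance_array_keys(&a, finaltup);
    printf("array keys caught up: cur=%d (>= finaltup %d)\n",
           a.cur, finaltup);
    return 0;
}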

One thing I am concerned about is that it reduces the cases where the
_bt_checkkeys_look_ahead() optimization can work effectively, since the
approach "skips the skip" immediately, on the first occurrence per
page. But I'd like to take your approach because I prioritize
stability.
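
To illustrate the look-ahead trade-off I mean, here is another toy
model I wrote (again, not the real code -- the trigger threshold and
the look-ahead distance below are arbitrary, not
_bt_checkkeys_look_ahead()'s actual heuristics): once several
consecutive tuples fail a required key, it probes a tuple further
ahead and, because the page is sorted, skips the run in between when
the probe also fails. With skipskip active, every key is treated as
non-required, so this intra-page skipping no longer applies:

#include <stdio.h>

int
main(void)
{
    /* one (sorted) leaf page; the required key is "value > 1" */
    int     page[] = {1, 1, 1, 1, 1, 1, 1, 1, 2, 3};
    int     ntuples = 10;
    int     failures = 0;
    int     i = 0;

    while (i < ntuples)
    {
        if (page[i] > 1)
        {
            printf("offset %d: match\n", i);
            failures = 0;
            i++;
            continue;
        }

        /*
         * Several consecutive failures: speculatively probe a tuple
         * further ahead.  If it fails too, the sorted order implies
         * everything in between fails, so skip the run wholesale.
         */
        if (++failures >= 3)
        {
            int     ahead = i + 4;  /* arbitrary look-ahead distance */

            if (ahead < ntuples && page[ahead] <= 1)
            {
                printf("offsets %d..%d: skipped via look-ahead\n",
                       i, ahead);
                i = ahead + 1;
                failures = 0;
                continue;
            }
        }
        i++;
    }
    return 0;
}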

FWIW, I conducted tests to understand the downside of the 0003 patch.
IIUC, the worst-case scenario occurs when the first few tuples on each
page have different values for the first attribute, while the rest
share the same value. The result shows that the 0003 patch causes a 2x
degradation in performance, although it is still faster than the
master branch.

* master with 0001, 0002 and 0003 patch.
-- SET skipscan_prefix_cols=32
Index Only Scan using t_idx on public.t (cost=0.42..3576.58 rows=2712 width=8) (actual time=0.019..15.107 rows=2717 loops=1)
Output: id1, id2
Index Cond: (t.id2 = 360)
Index Searches: 2696
Heap Fetches: 0
Buffers: shared hit=8126
Settings: effective_cache_size = '7932MB', work_mem = '15MB'
Planning Time: 0.049 ms
Execution Time: 15.203 ms
(9 rows)
* master with 0001 and 0002 patch. (without 0003 patch)
-- SET skipscan_prefix_cols=32
Index Only Scan using t_idx on public.t (cost=0.42..3576.55 rows=2709 width=8) (actual time=0.011..6.886 rows=2717 loops=1)
Output: id1, id2
Index Cond: (t.id2 = 360)
Index Searches: 2696
Heap Fetches: 0
Buffers: shared hit=8126
Settings: effective_cache_size = '7932MB', work_mem = '15MB'
Planning Time: 0.062 ms
Execution Time: 6.981 ms
(9 rows)
* the way to get the above result.
-- create table
DROP TABLE IF EXISTS t;
CREATE unlogged TABLE t (id1 int, id2 int);
-- A special case where the 0003 patch performs poorly.
-- It inserts 367 index tuples per page; only the first two id1 values
-- differ, while the rest share the same value.
-- psql=# SELECT * FROM bt_page_stats('t_idx', 1);
-- -[ RECORD 1 ]-+-----
-- blkno | 1
-- type | l
-- live_items | 367
-- dead_items | 0
-- avg_item_size | 16
-- page_size | 8192
-- free_size | 808
-- btpo_prev | 0
-- btpo_next | 2
-- btpo_level | 0
-- btpo_flags | 1
INSERT INTO t (
SELECT CASE WHEN i%368<3 THEN i%368+i/368*100 ELSE i/368*1000+10 END,
i%368
FROM generate_series(1, 1_000_000) s(i)
);
CREATE INDEX t_idx on t (id1, id2);
VACUUM FREEZE ANALYZE;
-- example data
SELECT * FROM t LIMIT 10;
SELECT * FROM t WHERE id2 > 365 LIMIT 10;
-- performance
SET skipscan_prefix_cols=32;
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT, SETTINGS, WAL, VERBOSE) SELECT *
FROM t WHERE id2 = 360; -- cache
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT, SETTINGS, WAL, VERBOSE) SELECT *
FROM t WHERE id2 = 360;
SET skipscan_prefix_cols=0;
EXPLAIN (ANALYZE, BUFFERS, FORMAT TEXT, SETTINGS, WAL, VERBOSE) SELECT *
FROM t WHERE id2 = 360;

Again, the above results are provided just for reference; since I
believe many users prioritize stability, I'd like to take your new
approach.
Regards,
--
Masahiro Ikeda
NTT DATA CORPORATION