From: Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>
To: Melanie Plageman <melanieplageman(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Andres Freund <andres(at)anarazel(dot)de>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>
Subject: Re: BitmapHeapScan streaming read user and prelim refactoring
Date: 2024-04-04 14:35:45
Message-ID: 95d9471f-ead4-4215-b739-f6d6f4c19497@enterprisedb.com
Lists: pgsql-hackers
On 4/4/24 00:57, Melanie Plageman wrote:
> On Sun, Mar 31, 2024 at 11:45:51AM -0400, Melanie Plageman wrote:
>> On Fri, Mar 29, 2024 at 12:05:15PM +0100, Tomas Vondra wrote:
>>>
>>>
>>> On 3/29/24 02:12, Thomas Munro wrote:
>>>> On Fri, Mar 29, 2024 at 10:43 AM Tomas Vondra
>>>> <tomas(dot)vondra(at)enterprisedb(dot)com> wrote:
>>>>> I think there's some sort of bug, triggering this assert in heapam
>>>>>
>>>>> Assert(BufferGetBlockNumber(hscan->rs_cbuf) == tbmres->blockno);
>>>>
>>>> Thanks for the repro. I can't seem to reproduce it (still trying) but
>>>> I assume this is with Melanie's v11 patch set which had
>>>> v11-0016-v10-Read-Stream-API.patch.
>>>>
>>>> Would you mind removing that commit and instead applying the v13
>>>> stream_read.c patches[1]? v10 stream_read.c was a little confused
>>>> about random I/O combining, which I fixed with a small adjustment to
>>>> the conditions for the "if" statement right at the end of
>>>> read_stream_look_ahead(). Sorry about that. The fixed version, with
>>>> eic=4, with your test query using WHERE a < a, ends its scan with:
>>>>
>>>
>>> I'll give that a try. Unfortunately, v11 still has the
>>> problem I reported about a week ago:
>>>
>>> ERROR: prefetch and main iterators are out of sync
>>>
>>> So I can't run the full benchmarks :-( but master vs. streaming read API
>>> should work, I think.
>>
>> Odd, I didn't notice you reporting this ERROR popping up. Now that I
>> take a look, v11 (at least, maybe also v10) had this very silly mistake:
>>
>> if (scan->bm_parallel == NULL &&
>>     scan->rs_pf_bhs_iterator &&
>>     hscan->pfblockno > hscan->rs_base.blockno)
>>     elog(ERROR, "prefetch and main iterators are out of sync");
>>
>> It errors out if the prefetch block is ahead of the current block --
>> which is the opposite of what we want. I've fixed this in attached v12.
>>
>> This version also has v13 of the streaming read API. I noticed one
>> mistake in my bitmapheap scan streaming read user -- it freed the
>> streaming read object at the wrong time. I don't know if this was
>> causing any other issues, but it at least is fixed in this version.
>
> Attached v13 is rebased over master (which includes the streaming read
> API now). I also reset the streaming read object on rescan instead of
> creating a new one each time.
>
> I don't know how much chance any of this has of going into v17 now, but
> I thought I would start looking into the regression repro Tomas provided
> in [1].
>
My personal opinion is that we should try to get in as many of the
refactoring patches as possible, but I think it's probably too late for
the actual switch to the streaming API.
If someone else feels like committing that part, I won't stand in the
way, but I'm not quite convinced it won't cause regressions. Maybe it's
OK but I'd need more time to do more tests, collect data, and so on. And
I don't think we have that, especially considering we'd still need to
commit the other parts first.
> I'm also not sure if I should try and group the commits into fewer
> commits now or wait until I have some idea of whether or not the
> approach in 0013 and 0014 is worth pursuing.
>
You mean whether to pursue the approach in general, or for v17? I think
it looks like the right approach, but for v17 see above :-(
As for merging, I wouldn't do that. I looked at the commits and while
some of them seem somewhat "trivial", I really like how you organized
them, keeping the ones that just "move" code around separate from the
ones that actually change behavior. It's much easier to understand, IMO.
I went through the first ~10 commits and added some review - either as
a separate commit when possible, as XXX comments in the code, and also
in the commit messages. The code tweaks are utterly trivial (whitespace
or indentation to make lines shorter), so it shouldn't take much time to
deal with them, I think.
I think the main focus should be updating the commit messages. If it was
only a single patch, I'd probably try to write the messages myself, but
with this many patches it'd be great if you could update those and I'll
review that before commit.
I always struggle with writing commit messages myself, and it takes me
ages to write a good one (well, I think the message is good, but who
knows ...). But I think a good message should be concise, yet explain
what is done and why. It may reference a thread for all the gory
details, but the basic reasoning should be in the commit message itself.
For example the message for "BitmapPrefetch use prefetch block recheck
for skip fetch" now says that it "makes more sense to do X" but does not
really say why that's the case. The linked message does, but it'd be
good to have that in the message (because how would I know how much of
the thread to read?).
Also, it'd be very helpful if you could update the author & reviewed-by
fields. I'll review those before commit, ofc, but I admit I lost track
of who reviewed which part.
I'd focus on the first ~8-9 commits for now; we can commit more if
things go reasonably well.
regards
--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company