From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Melanie Plageman <melanieplageman(at)gmail(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>, Masahiko Sawada <sawada(dot)mshk(at)gmail(dot)com>, Tomas Vondra <tomas(at)vondra(dot)me>, Noah Misch <noah(at)leadboat(dot)com>, vignesh C <vignesh21(at)gmail(dot)com>, Andres Freund <andres(at)anarazel(dot)de>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, Nazir Bilal Yavuz <byavuz81(at)gmail(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, "Andrey M(dot) Borodin" <x4mmm(at)yandex-team(dot)ru>
Subject: Re: Confine vacuum skip logic to lazy_scan_skip
Date: 2025-02-18 15:52:04
Message-ID: 1242389.1739893924@sss.pgh.pa.us
Lists: pgsql-hackers

Melanie Plageman <melanieplageman(at)gmail(dot)com> writes:
> On Sun, Feb 16, 2025 at 1:12 PM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> Basically, Coverity doesn't understand that a successful call to
>> read_stream_next_buffer must set per_buffer_data here. I don't
>> think there's much chance of teaching it that, so we'll just
>> have to dismiss this item as "intentional, not a bug".
> Is this easy to do? Like, is there a list of things from Coverity to ignore?
Their website has a table of live issues, and we can just mark this
one "dismissed". I'm not entirely sure how they recognize dismissed
issues --- it's not perfect, because old complaints tend to get
resurrected after changes in nearby code. But it's good enough.
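
For context, the loop Coverity complains about has roughly this shape (a
simplified sketch, not the exact vacuum code; "stream" and "process_block"
stand in for the real variables and per-page work):

    void       *per_buffer_data;

    while (true)
    {
        Buffer      buf = read_stream_next_buffer(stream, &per_buffer_data);

        /* InvalidBuffer means the stream is exhausted */
        if (!BufferIsValid(buf))
            break;

        /*
         * A valid buffer guarantees read_stream_next_buffer() set
         * per_buffer_data, but Coverity cannot see that invariant, so it
         * reports a possible use of an uninitialized pointer here.
         */
        process_block(buf, per_buffer_data);
    }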
>> I do have a suggestion: I think the "per_buffer_data" variable
>> should be declared inside the "while (true)" loop not outside.
> Done and pushed. Thanks!
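
The pushed change is essentially a scope narrowing; the sketch above becomes
roughly (again simplified, not the committed diff):

    while (true)
    {
        /* scoped to a single iteration, so no value carries across loops */
        void       *per_buffer_data;
        Buffer      buf = read_stream_next_buffer(stream, &per_buffer_data);

        if (!BufferIsValid(buf))
            break;

        process_block(buf, per_buffer_data);
    }

Declaring the variable inside the loop makes it obvious that each iteration's
pointer can only come from that iteration's read_stream_next_buffer() call.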
Thanks, looks better now.
regards, tom lane