From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-committers <pgsql-committers(at)postgresql(dot)org>
Subject: Re: pgsql: Fix bogus size calculation introduced by commit cc5f81366.
Date: 2017-09-18 00:49:06
Message-ID: CAB7nPqTLQWE6X6oBg-0XGci6LoxfKZcBvFZ7YKDfrFZqPeG_hg@mail.gmail.com
Lists: pgsql-committers
On Mon, Sep 18, 2017 at 6:58 AM, Thomas Munro
<thomas(dot)munro(at)enterprisedb(dot)com> wrote:
> While googling around trying to find where I could read Coverity's
> output myself I was intrigued to see that https://scan.coverity.com
> offers integration with Travis CI[1], which suggests the possibility
> of automatically scanning all Commitfest submissions. The trouble is
> that for projects over 1 million lines of code they limit scans to one
> per day, so it'd take over 200 days to get through the current
> Commitfest, assuming no one ever posted new versions or committed
> anything in the meantime. Hah. I guess Coverity analysis is going to
> have to remain post-commit only.
There is a mountain of false positives to take care of when doing the
initial scan of a new project. So while the initial cost is high, this
would be maintainable in the long term if a continuous effort is put
into it. The limit due to the project size sucks, but at least it
tells us that Coverity is not a solution for the CF. Careful review
removes most of those problems anyway.
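For reference, the Travis CI integration mentioned above is normally
driven by a coverity_scan addon stanza in .travis.yml; here is a
minimal sketch, with the project slug, notification address, and build
commands as placeholders:

    env:
      global:
        # COVERITY_SCAN_TOKEN, generated on scan.coverity.com and
        # encrypted with `travis encrypt`
        - secure: "..."

    addons:
      coverity_scan:
        project:
          name: "your_github_org/your_repo"    # placeholder project slug
          description: "Build submitted via Travis CI"
        notification_email: you@example.com    # placeholder address
        build_command_prepend: "./configure"
        build_command: "make -j4"
        branch_pattern: coverity_scan   # only this branch uploads a scan

Restricting uploads to a dedicated branch keeps routine pushes from
counting against the per-day submission limit discussed above.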
--
Michael