Re: Finding cause of test fails on the cfbot site

From: Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
To: Andrew Dunstan <andrew(at)dunslane(dot)net>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Peter Smith <smithpb2250(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Finding cause of test fails on the cfbot site
Date: 2021-02-17 20:42:04
Message-ID: CA+hUKGLPBXo7FFUgitTOBwMn7vdR4zp0N-VTA9PbeBwR_NPVHA@mail.gmail.com
Lists: pgsql-hackers

On Thu, Feb 18, 2021 at 9:18 AM Andrew Dunstan <andrew(at)dunslane(dot)net> wrote:
> On 2/17/21 11:06 AM, Tom Lane wrote:
> > Peter Smith <smithpb2250(at)gmail(dot)com> writes:
> >> I saw that one of our commitfest entries (32/2914) is recently
> >> reporting a fail on the cfbot site [1]. I thought this was all ok a
> >> few days ago.
> >> ...
> >> Is there any other detailed information available anywhere, e.g.
> >> logs?, which might help us work out what was the cause of the test
> >> failure?
> > AFAIK the cfbot doesn't capture anything beyond the session typescript.
> > However, this doesn't look that hard to reproduce locally ... have you
> > tried, using similar configure options to what that cfbot run did?
> > Once you did reproduce it, there'd be logs under
> > contrib/test_decoding/tmp_check/.
>
> yeah. The cfbot runs check-world which makes it difficult for it to know
> which log files to show when there's an error. That's a major part of
> the reason the buildfarm runs a much finer grained set of steps.

Yeah, it's hard to make it print just the right logs without dumping
so much that you can't see the wood for the trees. Perhaps the
Makefile could have an option to dump the relevant material for the
specific tests that failed, or perhaps the buildfarm is already better
at that and cfbot should just use the buildfarm client directly. Hmm.
Another idea would be to make a tarball of all log files when things
go wrong, which you could download and inspect with better tools at
home. That would rapidly blow through the 1GB limit for stored
"artefacts" on open source/community Cirrus accounts, though, so we'd
need to figure out how to manage retention carefully.
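A minimal sketch of that tarball idea (hypothetical paths and file
names; it assumes GNU tar, and that a failed check-world run leaves
behind pg_regress regression.diffs files and TAP-test tmp_check/log
directories in the build tree):

```shell
# Sketch only: after a failed check-world, sweep up the conventional
# pg_regress and TAP-test log locations and bundle them for download.
# The patterns here are illustrative, not an exhaustive list.
find . \( -name regression.diffs -o -path '*/tmp_check/log/*' \) -type f \
    -print0 | tar --null -czf test-logs.tgz --files-from=-
```

CI would then upload test-logs.tgz as an artifact only on failure,
which keeps the storage cost bounded to broken runs.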

For what it's worth, I tried to reproduce this particular failure on a
couple of systems, many times, with no luck. It doesn't look like a
freak CI failure: there have been a few random terminations recently
that I can't explain (possibly a Google Compute Engine outage), but
they look different, and this one failed in exactly the same way on
Linux and FreeBSD. I tested locally on FreeBSD on top of commit
a975ff4980d60f8cbd8d8cbcff70182ea53e787a (the commit the last cfbot
run used), because the patch conflicts with a recent change and no
longer applies on the tip of master.
