From: Andres Freund <andres(at)anarazel(dot)de>
To: Jacob Champion <jacob(dot)champion(at)enterprisedb(dot)com>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: RFC: adding pytest as a supported test framework
Date: 2024-06-10 20:04:11
Message-ID: 20240610200411.byj6sv2vpgol6wcf@awork3.anarazel.de
Lists: pgsql-hackers
Hi,
Just for context for the rest of the email: I think we desperately need to
move off perl for tests. The infrastructure around our testing is basically
unmaintained, and just about nobody who started doing dev work in the last
10 years has learned perl.
On 2024-06-10 11:46:00 -0700, Jacob Champion wrote:
> 4. It'd be great to split apart client-side tests from server-side
> tests. Driving Postgres via psql all the time is fine for acceptance
> testing, but it becomes a big problem when you need to test how
> clients talk to servers with incompatible feature sets, or how a peer
> behaves when talking to something buggy.
That seems orthogonal to using pytest vs something else?
> == Why pytest? ==
>
> From the small and biased sample at the unconference session, it looks
> like a number of people have independently settled on pytest in their
> own projects. In my opinion, pytest occupies a nice space where it
> solves some of the above problems for us, and it gives us plenty of
> tools to solve the other problems without too much pain.
I found pytest's test runner pretty painful: oodles of options that are not
well documented and that often don't work because they are specific to
particular situations, without that being explained. We might be able to
alleviate that by abstracting it away.
> Problem 1 (rerun failing tests): One architectural roadblock to this
> in our Test::More suite is that tests depend on setup that's done by
> previous tests. pytest allows you to declare each test's setup
> requirements via pytest fixtures, letting the test runner build up the
> world exactly as it needs to be for a single isolated test. These
> fixtures may be given a "scope" so that multiple tests may share the
> same setup for performance or other reasons.
OTOH, that's quite likely to increase overall test times very
significantly. Yes, sometimes that can be avoided with careful use of various
features, but often that's hard, and IME it is rarely done rigorously.
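For concreteness, here is a minimal sketch of such a scoped fixture; the
cluster helpers (pg_testlib, init_cluster, connect) are hypothetical
placeholders for infrastructure we don't have yet, not an existing API:

    import pytest

    # Hypothetical helper module; a real suite would need something
    # comparable to Cluster.pm to provide these.
    from pg_testlib import init_cluster, connect

    @pytest.fixture(scope="module")
    def cluster():
        """Start one cluster and share it across all tests in this module."""
        node = init_cluster()
        node.start()
        yield node
        node.stop()

    def test_select_one(cluster):
        with connect(cluster) as conn:
            assert conn.execute("SELECT 1") == [(1,)]

The module scope is exactly the sharing-for-performance trade-off mentioned
above: the setup runs once per module instead of once per test, at the cost
of the tests no longer being fully isolated.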
> Problem 2 (seeing what failed): pytest does this via assertion
> introspection and very detailed failure reporting. If you haven't seen
> this before, take a look at the pytest homepage [1]; there's an
> example of a full log.
That's not really different from what the perl TAP test stuff allows. We
are indeed bad at utilizing it, but I'm not sure that switching languages
will change that.
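As a small illustration of what that introspection looks like, a plain
assert in a (hypothetical) test is enough; on failure pytest re-evaluates
the expression and reports the differing parts rather than just "assertion
failed":

    def test_gucs_match():
        expected = {"work_mem": "4MB", "shared_buffers": "128MB"}
        actual = {"work_mem": "4MB", "shared_buffers": "16MB"}
        # pytest's failure report shows which dictionary entries differ.
        assert actual == expected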
I think part of the problem is that the information about what precisely
failed is often much harder to collect when testing multiple servers
interacting than when doing localized unit tests.
I think we ought to invest a bunch in improving that, and I'd hope that a
lot of that work would be largely independent of the language the tests are
written in.
> Python's standard library has lots of power by itself, with very good
> documentation. And virtualenvs and better package tooling have made it
> much easier, IMO, to avoid the XKCD dependency tangle [4] of the
> 2010s.
Ugh, I think this is actually python's weakest area. There are about a dozen
package managers and "python distributions" that are at best half compatible,
and the documentation situation around this is *awful*.
> When it comes to third-party packages, which I think we're
> probably going to want in moderation, we would still need to discuss
> supply chain safety. Python is not as mature here as, say, Go.
What external dependencies are you imagining?
> == A Plan ==
>
> Even if everyone were on board immediately, there's a lot of work to
> do. I'd like to add pytest in a more probationary status, so we can
> iron out the inevitable wrinkles. My proposal would be:
>
> 1. Commit bare-bones support in our Meson setup for running pytest, so
> everyone can kick the tires independently.
> 2. Add a test for something that we can't currently exercise.
> 3. Port a test from a place where the maintenance is terrible, to see
> if we can improve it.
>
> If we hate it by that point, no harm done; tear it back out. Otherwise
> we keep rolling forward.
I think somewhere between 1 and 4 a *substantial* amount of work would be
required to provide a bunch of the infrastructure that Cluster.pm etc.
provide. Otherwise we'll end up with a lot of copy-pasted code between tests.
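To make the scale of that concrete, here is a rough sketch of the most basic
building block such shared infrastructure would need, a Python analogue of
Cluster.pm's node handling. Everything here is hypothetical and heavily
simplified; it is not an existing module:

    import shutil
    import subprocess
    import tempfile

    class PostgresNode:
        """Bare-bones stand-in for Cluster.pm's node object."""

        def __init__(self, name):
            self.name = name
            self.pgdata = tempfile.mkdtemp(prefix=f"pgtest_{name}_")

        def init(self):
            # Create a fresh data directory for this node.
            subprocess.run(["initdb", "-D", self.pgdata], check=True)

        def start(self):
            subprocess.run(["pg_ctl", "-D", self.pgdata,
                            "-l", f"{self.pgdata}/server.log", "start"],
                           check=True)

        def stop(self):
            subprocess.run(["pg_ctl", "-D", self.pgdata, "stop"], check=True)
            shutil.rmtree(self.pgdata, ignore_errors=True)

Port assignment, config management, backups, standbys, log searching, safe
psql invocation and so on would all still have to be built on top of
something like this, which is where much of Cluster.pm's value lies.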
Greetings,
Andres Freund