Test sdist contents instead of Git checkout in CI #2541
base: main
Conversation
We already test the sdist contents (see https://github.com/python-trio/trio/blob/master/ci.sh#L131-L148), which has the advantage of also working locally. I'll close preemptively, but feel free to reopen.
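For reference, the local flow that ci.sh implements boils down to something like the following CI-step sketch (paths and commands here are illustrative, not trio's exact ones):

```shell
# Build an sdist, unpack it, and run the test suite from the unpacked
# tree rather than from the Git checkout. Names below are illustrative.
python -m build --sdist                 # needs the 'build' package
mkdir -p /tmp/sdist-check
tar xzf dist/*.tar.gz -C /tmp/sdist-check
cd /tmp/sdist-check/*/
python -m pip install .                 # install from the sdist tree
python -m pytest                        # tests must be shipped for this to work
```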
I had missed that you want to allow others to test using only the sdist, but including a random certificate in our sdist seems wrong even if that means downstream won't be able to test LSP support. (Maybe others disagree, in which case I will happily reopen.)
@pquentin well, the tests need to be usable if they are included in the sdist. If you don't like the certificate, maybe we could exclude it and add a way to skip those tests, keeping the rest in good shape. After all, downstreams are usually Linux-based anyway. OTOH, if a “random certificate” is a prerequisite for the tests, it's not really “random”: it's a part of the testing setup. I'll reopen this for now to get more visibility. I firmly believe that sdists should ship everything necessary to run the tests and be self-contained in general. Otherwise, an implicit dependency on Git may appear, and it won't be noticeable in many environments.
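A minimal sketch of the skip idea, shown with stdlib unittest (trio itself uses pytest, where pytest.mark.skipif plays the same role; the certificate file name below is hypothetical):

```python
import os
import unittest

# Hypothetical file name; the real certificate shipped for such tests may differ.
CERT_PATH = os.path.join(os.path.dirname(os.path.abspath(__file__)), "test-cert.pem")

@unittest.skipUnless(os.path.exists(CERT_PATH),
                     "test certificate not shipped in this sdist")
class TestWithCertificate(unittest.TestCase):
    def test_tls_handshake(self):
        # A real test would load CERT_PATH and exercise the TLS code paths.
        pass
```

With this, an sdist that excludes the certificate still has a fully runnable test suite: the dependent tests report as skipped instead of erroring out.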
In fact, this PR demonstrates that a part of the prerequisites is indeed missing from the sdist.
While I generally like the approach of testing the artifacts instead of the source, there's a basic hazard that I think this doesn't address. Or maybe I looked too quickly. What if some tests are simply missing, or get skipped, etc.? I would expect some verification that the artifact has all the proper tests compared to the source checkout.
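One sketch of such a verification: compare the test modules tracked in the Git checkout against those actually packed into the sdist tarball (function names and paths here are illustrative; the check-manifest tool automates a similar sdist-vs-VCS comparison):

```python
import pathlib
import subprocess
import tarfile

def test_modules_in_git(repo="."):
    """Names of test files tracked by Git (requires a checkout)."""
    out = subprocess.check_output(
        ["git", "-C", repo, "ls-files", "*test_*.py"], text=True
    )
    return {pathlib.PurePosixPath(line).name for line in out.splitlines()}

def test_modules_in_sdist(sdist_path):
    """Names of test files actually shipped inside the sdist tarball."""
    with tarfile.open(sdist_path) as tar:
        return {
            pathlib.PurePosixPath(member.name).name
            for member in tar.getmembers()
            if pathlib.PurePosixPath(member.name).name.startswith("test_")
        }

# In CI one would then assert something like:
#   missing = test_modules_in_git() - test_modules_in_sdist("dist/pkg-1.0.tar.gz")
#   assert not missing, f"tests absent from sdist: {sorted(missing)}"
```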
@altendky that's a good point. I've given some thought to what you said. My first instinct was to add a CI check that would retrieve the number of executed tests from the JUnit output that pytest produces and ensure that it's sufficiently big. Then, I remembered seeing a pytest plugin that lets you specify a minimum number of tests that must succeed for the test session to be considered successful. Never used that one personally, though.

The more I thought about it, the more I realized that the solution is on the surface: the coverage metric would drop dramatically if the tests stopped being executed. Specifically, the test directory should report 100% coverage. I recall earlier discussions in the community on whether to measure coverage on the tests themselves, and nedbat even has a blog post explaining why this is important. I went to https://app.codecov.io/gh/python-trio/trio/tree/master/trio/tests and, to my surprise, its coverage is at 96.03%; that's no good! Looking inside, I see that there are a few reasons that things are not covered:
Digging deeper, I see
This is from https://github.com/python-trio/trio/actions/runs/3958687617/jobs/6780554545#step:4:1660. There are some problems with how this is set up:
The solution would be simple: install the Codecov GitHub App into the repo and make coverage a required check. Configure coverage to demand 100% on the test directory, at least. Use the official up-to-date uploader (directly in the script, or through the action in GHA). This would make sure that the tests are included and executed, IMO. In conjunction with explicitly adding them to
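As a sketch, the Codecov side of that could look roughly like this codecov.yml fragment (the path and targets are assumptions for illustration, not the project's current settings):

```yaml
# Hypothetical codecov.yml: fail the (required) status check if overall
# coverage regresses, and demand full coverage of the test directory itself.
coverage:
  status:
    project:
      default:
        target: auto        # compare against the base commit
      tests:
        target: 100%        # the tests themselves must all execute
        paths:
          - "trio/tests/"
```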
Thanks for digging into this and for identifying some other points to work on.
So WDYT about improving packaging along the way?
I'm not particularly involved here at this point, so I won't promise to provide reviews or such, but I am quite used to ending up multiple layers deep in fixing stuff to get PRs through. I would personally suggest doing it as separate PRs, though, and not putting it all in this one. tl;dr: sounds great!
Yeah, I didn't mean to imply that it'd be included in this PR. I'm pretty much fanatic about making PRs and commits atomic.
UPD: this has recently been fixed by making use of the official Codecov action.
This patch helps make sure that the sdist contents are enough to run tests downstream.
Commits 81ec287 to 3d9e30f (compare)
Codecov Report
Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #2541      +/-   ##
==========================================
+ Coverage   98.10%   98.14%    +0.04%
==========================================
  Files         118      118
  Lines       16471    16471
  Branches     2977     2977
==========================================
+ Hits        16159    16166        +7
+ Misses        255      249        -6
+ Partials       57       56        -1
I think maybe the formatting check can/should be excluded from this transition and kept running against the Git contents. I don't see any reason for downstream users to run formatting checks, and it saves having to bundle a couple of files, including
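In setuptools terms, that split could be expressed in MANIFEST.in roughly like this (file names are illustrative; trio's actual layout may differ):

```
# Hypothetical MANIFEST.in: ship the tests and their fixtures in the sdist,
# but leave formatting/lint configuration to the Git checkout only.
graft trio/tests
include test-requirements.txt
exclude .pre-commit-config.yaml
exclude .flake8
```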
I am curious about the status of this change; it sounds like it would be useful for the reasons you mentioned.
@jakkdl there's a school of thought preaching that sdists should be as close to the Git source as possible. And I also prefer it this way; it's easier to maintain, for one. In my projects,

@CoolCat467 it fell off my radar. I thought it was kinda ready but couldn't get enough people on board to get approvals. Maybe I'll resurrect it some time...