Remove pyspectral downloading from tests and fail if used #3260
Conversation
Codecov Report
✅ All modified and coverable lines are covered by tests.

Coverage diff (main vs. #3260):
  Coverage: 96.34% -> 96.33% (-0.01%)
  Files:    463    -> 463
  Lines:    58916  -> 58926  (+10)
  Hits:     56760  -> 56769  (+9)
  Misses:   2156   -> 2157   (+1)
With very little effort, just comparing execution times of the unit test steps against a previous run: CI on the "ubuntu 3.12 false" environment ran the unit tests 50 seconds faster.
Coveralls: Pull Request Test Coverage Report for Build 18452799198.
pnuu
left a comment
I'm not sure I follow completely how this works, but I certainly approve of forcing the download to be disabled.
For hopefully a little clarification: pytest always loads any `conftest.py` it finds, so the fixture defined there applies automatically. I also defined it with session scope, so it should hopefully only run at the start of testing rather than being enabled/disabled over and over for every test. I think that should be fine even with our pytest-xdist usage, where tests run in parallel, but I should confirm that.
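To make the mechanism concrete, here is a minimal sketch of a session-scoped autouse fixture of the shape described above; the fixture name and the body are placeholders, not the actual code from this PR.

```python
# conftest.py
import pytest


@pytest.fixture(autouse=True, scope="session")
def _disable_pyspectral_downloads():
    # pytest auto-loads conftest.py, and autouse=True applies this fixture to
    # every test without any test requesting it.  With scope="session" the
    # setup and teardown run once per test session (in practice once per
    # pytest-xdist worker) rather than around every individual test.
    ...  # disable downloading here (see the PR description for the approach)
    yield
    ...  # restore the previous behavior
```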
Did a quick check and I think this should be fine. The fixture is run once per worker rather than once overall, but it is actually executed in each worker, so that's fine.
mraspaud
left a comment
For reference, as I said on Slack, I would like the rayleigh and calculator mocks to be outsourced to pyspectral, e.g. in a "testing" module. See also: pytroll/pyspectral#262
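For illustration only, something like the following could live in such a hypothetical `pyspectral.testing` module; the helper name is invented, and `Rayleigh.get_reflectance` is assumed to be the entry point Satpy calls.

```python
# Hypothetical pyspectral "testing" helper, sketched here for discussion only.
from unittest import mock

from pyspectral.rayleigh import Rayleigh


def fake_rayleigh(reflectance=0.0):
    """Return an autospec'd Rayleigh instance mock that never reads LUT files."""
    rayleigh = mock.create_autospec(Rayleigh, instance=True)
    rayleigh.get_reflectance.return_value = reflectance
    return rayleigh
```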
(force-pushed from eb5c0c9 to ca3e977)
OK, I've rebased this entire branch and then merged in my branch for the AVHRR HRPT test fixes from #3330. So hopefully, if this PR is approved and merged, that PR will be detected as merged as well.
OK, I've now also merged #3318 because that was the latest failure I saw.
Is this failing because of the fake pyspectral data we use now?
Probably. I will review when I have time. Permission to merge if it passes after I review what's going on and add some tolerances?
Hm, two other environments passed for the same OS. I'm going to rerun the failed jobs and see what happens. I did not see this failure before my last commit (I think), so I'm a little confused.
Yes for me |
@mraspaud it looks like restarting the jobs fixed the failure. If they end up being inconsistent, I will work on making them more consistent. Merging...
I noticed while working on other PRs that some of the CI tests were failing with HTTP hiccups while trying to download pyspectral LUTs. It turns out this has been happening for years and is a major waste of resources; even though it was nice to have the full processing tested, it isn't "right" in my opinion. This PR:

- Uses `autospec` for the pyspectral mocks so Satpy's usage will always match upstream pyspectral.
- Patches the `requests` library so tests fail if they make pyspectral download anything.
- Adds me to `AUTHORS.md` if not there already.
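As a rough sketch of the second bullet (filling in the placeholder body from the fixture sketch earlier in this thread), the idea could look like the following; the exact patch target and error message are illustrative assumptions, not necessarily what this PR does.

```python
# conftest.py -- block all HTTP traffic through requests so an accidental
# pyspectral download fails the test instead of silently hitting the network.
from unittest import mock

import pytest


@pytest.fixture(autouse=True, scope="session")
def _no_pyspectral_downloads():
    def _refuse(self, method, url, *args, **kwargs):
        raise RuntimeError(
            f"Test attempted an HTTP {method} request to {url}; "
            "downloading pyspectral data is not allowed during tests"
        )

    # requests.get/post/etc. all funnel through Session.request, so patching
    # it on the class catches every HTTP verb.
    with mock.patch("requests.sessions.Session.request", new=_refuse):
        yield
```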