Add missing dependency on fixtures to help parallel testing #1303
Conversation
Thank you for taking a look at this issue. The fix is good, but it seems to break consistency in the codebase: in some places we specify the fixture on each test, while this change introduces autouse in a few places.
- Option 1 -- Use autouse everywhere: If we use autouse, shouldn't we update all the existing uses of the fixtures?
- Option 2 -- Use autouse under some conditions: Or is there a criterion for whether to use autouse or not (such as using autouse only when all tests in the class require the fixture)?
- Option 3 -- Manually specify fixtures one by one: If we do not touch the existing ones, I think, for consistency, we should specify the fixture in the arguments of each test requiring the fixture.
I currently don't have a strong opinion about which approach to choose, but the change should at least be consistent under one rule; a sketch contrasting options 1 and 3 follows below. I'm interested in whether other maintainers @scop and @yedayak have any thoughts.
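For illustration, here is a minimal pytest sketch contrasting option 1 and option 3; the class, fixture, and test names are hypothetical, not the project's actual code.

    import pytest

    class TestOption1:
        # Option 1: autouse -- the fixture runs before every test in
        # the class, whether or not a test names it in its signature.
        @pytest.fixture(autouse=True)
        def functions(self):
            pass  # define the command under test here

        def test_completion(self):
            pass  # implicitly depends on "functions"

    class TestOption3:
        # Option 3: every test that needs the fixture requests it
        # explicitly, making the dependency visible in the signature.
        @pytest.fixture
        def functions(self):
            pass  # define the command under test here

        def test_completion(self, functions):
            pass  # the dependency is explicit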
I'm not sure how this helps with #430. Aren't fixtures supposed to always run before the tests that rely on them? I don't see anything in the documentation saying this affects ordering. How did you test this fix?
@yedayak wrote:
The documentation you pointed to says:

> In this example, the append_first fixture is an autouse fixture. Because it happens automatically, both tests are affected by it, even though neither test requested it. That doesn’t mean they can’t be requested though; just that it isn’t necessary.

That's exactly what's going on here: none of the tests I fixed requested the fixture, but they all depend on it.
I ran the tests with xdist. Before this patch, these tests fail with messages similar to the ones quoted below; after this patch, all of these previously failing tests pass. I'll push a new version of the patch with these instructions in the commit message (as well as with the changes requested by @akinomyoga).
@akinomyoga wrote:
Indeed, thanks for pointing it out.
Since I was mainly hoping to fix the issues, I submitted a new version of the patch that implements option 3, thus removing the inconsistency in the codebase.
I also don't have a strong opinion. Anyhow, I would probably prefer option 1, since it is easier to maintain. I can volunteer to change it everywhere in the codebase if that's what you, the maintainers, prefer. Please consider integrating my current change anyway, since it already helps with #430. Thanks for your reviews, @yedayak and @akinomyoga.
Ah, yeah. Somehow, no one had described the issue. The issue is that some of our test cases don't specify their dependencies properly even though they actually depend on the fixture. I think it is clearer if you look at the new version of the PR changes (with option 3): 8f0ccec. So I think #430 isn't a problem in the framework; it's just bugs in specific test cases that are missing the necessary dependency specification.
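For illustration, a minimal sketch of the failure mode, assuming the test suite's usual pattern of a shared bash session per class; the names below ("__tester", "functions"), the "bash" fixture, and the "assert_bash_exec" helper are hypothetical stand-ins here.

    import pytest
    from conftest import assert_bash_exec  # hypothetical import path

    class TestUnitTester:
        @pytest.fixture(scope="class")
        def functions(self, bash):
            # Define the command under test in the shared bash session.
            assert_bash_exec(bash, "__tester() { echo ok; }")

        def test_1(self, bash, functions):
            # Requests "functions", so __tester is always defined here.
            assert_bash_exec(bash, "__tester")

        def test_2(self, bash):
            # Bug: does not request "functions". Run serially, test_1
            # has already defined __tester in the shared session, so
            # this passes by accident. Under "pytest -n <jobs>" this
            # test can land on a worker where "functions" never ran,
            # and __tester is undefined (exit status 127).
            # The option-3 fix is to add "functions" to the parameters.
            assert_bash_exec(bash, "__tester")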
Thank you. Yeah, that would be the minimal modification that keeps things consistent.
If the discussion turns out to take time, we can first merge the PR with option 3 as the easier fix, and then continue the discussion.
akinomyoga left a comment
CI complains about the code blocks in the commit message: two lines don't fit within 80 columns. For the BASH_COMPLETION_TEST_BASH_COMPLETION= line, you could replace the path with /path/to/bash_completion or something. For the line with the error message, you can refold it at column 80.
Commit message updated in 1fc9d78.
New commit, 8c3efb6, to fix the commit-message title error flagged by the CI.
I'm getting a failure from the CI. Is this a known problem? It doesn't look like it's related to my changes.
Yeah, this isn't an issue with this PR; it seems something else broke, maybe in the ubuntu-latest GitHub Actions image. I checked, and rerunning a previously successful run fails now.
Is there anything I could do about the failing test?
I created #1307, which seems to fix the issue, but I don't think it should stop this PR from merging.
Can you rebase on main? pre-commit should pass now.
When running the test suite with xdist and 'pytest -n <jobs>', several
tests fail. This happens because, in these tests, the definition of the
tested command, usually in a fixture method called 'functions', might
happen only after the execution of the tests themselves, i.e. the
methods whose names start with 'test_'.
This patch adds the missing dependency on these test-command-defining
fixtures, so that they are executed before the tests themselves.
Steps to reproduce:
1. Make sure pytest-xdist is installed. On Debian systems this can be
verified with the following command:
dpkg --list python3-pytest-xdist
2. Build and install bash-completion locally, for example with the
following commands:
autoreconf -f -i
./configure --prefix=$PWD/install/
make install
3. Run the test suite with a few parallel jobs, such as with:
export MYPATH=$PWD/install/share/bash-completion/bash_completion
BASH_COMPLETION_TEST_BASH_COMPLETION=$MYPATH \
pytest \
-n 8 \
test/t/unit/test_unit_count_args.py \
test/t/unit/test_unit_dequote.py \
test/t/unit/test_unit_get_first_arg.py \
test/t/unit/test_unit_quote.py
unset MYPATH
Before this patch, these tests fail with messages similar to:
FAILED test/t/unit/test_unit_quote.py::TestUnitQuote::test_3 -
AssertionError: Error running "__tester " a "": exit status=127, output="
After this patch, all these tests, which previously failed, pass.
This might help with #430...
In my case, test/t/unit/test_unit_command_offset.py's test_cmd_quoted is the only one still failing.
Original commit message follows:
-- 8< --
When running the test suite with xdist and 'pytest -n <jobs>', several tests fail. This happens because, in these tests, the definition of the testing function, usually in a method called 'functions', might happen after the execution of the tests themselves, i.e. the methods whose names start with 'test_'.
This patch adds 'autouse=True' to these test-function-defining methods, so that they are executed before the tests.