Refactor TestCase subclasses to directly inherit from unittest.TestCase #883
acosmicflamingo wants to merge 10 commits into
Conversation
/ok to test 3eeb4ae
|
Thanks for the PR! I've had a quick skim, and started the tests to see how it goes.
@gmarkall no problem! Thanks for the feedback; I'll implement suggestions in a bit. Whoops, forgot to mark this PR as a draft; would not want you to have to review the PR for the nth time because you thought I had finally pushed that last commit (or maybe you'd prefer it be in its current state and you periodically check to see how it's going?). By the way, what is the status of upstream numba's cuda-specific test suite, and what's the relationship between this codebase and theirs? I was finding that some functions from support.py marked for deletion reside in upstream numba's codebase. I was curious as to whether some of the work in this PR should eventually make its way upstream, or whether it's not really supported now that numba-cuda is in the picture.
/ok to test 65d755b |
I'm quite happy if we just iterate a few times, as long as you're OK with that. I find it hard to do "one perfect review" that outlines perfectly the exact set of suggestions I want to make anyway.
There's a note about it in the Numba documentation (https://numba.readthedocs.io/en/stable/cuda/overview.html#built-in-cuda-target-deprecation-and-maintenance-status), but I realise now we don't explicitly spell it out in the Numba-CUDA docs.
/ok to test 65d755b
Thanks for trying to start CI - unfortunately, only NVIDIA staff can run CI. Please ping me if you're waiting to run it to get feedback from the CI test suite though. |
Got it! I'm okay with incremental reviews; I was actually thinking it's the better approach after seeing that just this PR alone has 22 modified files and I have a loooong way to go. Thank you for the background information and for pointing me to the documentation; that really helps! I wonder whether I should add a mention of it in this separate low-priority PR I had made tweaking CONTRIBUTING.md.
Maybe...? It looks most of the way there to me; I was expecting we'd just be doing a few tweaks (assuming that fixing CI is not too onerous). Let me know if you need any advice on reproducing the failures locally.
Thank you for the kind offer - I think that there are a few directional changes that need to be made to our guidance on Numba vs Numba-CUDA soon that will make it hard to align with an external contribution - could we please keep the other PR as-is for now?
I see. This PR is about decoupling `TestCase` methods from the class.
Yes that's fine with me!
Ha! That explains the disobedient bot ;) I'm going to try and reproduce the issue by creating a conda environment, since the issue seems to stem from there. I don't want to ping you every 4 minutes to try and fix that particular issue; I will save your patience for another time.
@gmarkall never used this assertion style before; it fails like this:

```text
    assert status.stderr == "", status.stderr
           ^^^^^^^^^^^^^^^^^^^
E   AssertionError: .../pytest_benchmark/logger.py:44: PytestBenchmarkWarning: Benchmarks are automatically disabled because xdist plugin is active. Benchmarks cannot be performed reliably in a parallelized environment.
E     warner(PytestBenchmarkWarning(text))

numba_cuda/numba/cuda/tests/support.py:416: AssertionError
```

The original code would assert that "OK" was present in stderr, which I figured was just a way to ensure the test really did pass:

```python
if no_tests_ran in status.stderr:
    self.skipTest(no_tests_ran)
else:
    self.assertIn("OK", status.stderr)
return status
```

After switching to `assert status.stderr == "", status.stderr`, the benchmark warning above trips the assertion. Do you have a favorite out of the potential solutions below?

### Option A ###

```diff
 if no_tests_ran in status.stderr:
     self.skipTest(no_tests_ran)
-else:
-    self.assertIn("OK", status.stderr)
 return status
```

### Option B ###

```diff
+_IGNORED_STDERR_PATTERNS = [
+    "PytestBenchmarkWarning",
+    "Benchmarks are automatically disabled",
+]
+
+def _unexpected_stderr_found(stderr):
+    lines = stderr.splitlines()
+    return any(
+        line and not any(pat in line for pat in _IGNORED_STDERR_PATTERNS)
+        for line in lines
+    )
+
 no_tests_ran = "NO TESTS RAN"
 if no_tests_ran in status.stderr:
     pytest.skip(no_tests_ran)
+elif _unexpected_stderr_found(status.stderr):
+    pytest.fail(f"Unexpected stderr output:\n{status.stderr}")
 return status
```

### Option C ###

???
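For reference, a standalone, runnable version of the Option B filter could look like the sketch below (the function name and pattern list mirror the diff, but this is illustrative, not the PR's code):

```python
# Sketch of the "Option B" stderr filter as a standalone helper:
# stderr is considered clean when every non-empty line matches a
# known-benign pattern.
_IGNORED_STDERR_PATTERNS = [
    "PytestBenchmarkWarning",
    "Benchmarks are automatically disabled",
]


def unexpected_stderr_found(stderr: str) -> bool:
    """Return True if stderr has a line not covered by the ignore list."""
    return any(
        line and not any(pat in line for pat in _IGNORED_STDERR_PATTERNS)
        for line in stderr.splitlines()
    )
```

With this shape, a pytest-benchmark warning line is silently ignored, while any other non-empty stderr line (e.g. a stray traceback) is flagged.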
Force-pushed from fc6cddf to 39e4aea
This is the first of many commits that will be migrating the test suite to using pytest. Since numba.runtests uses unittest, switching to pytest when running subprocesses seems like a good starting point.
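As a minimal sketch of that starting point (the helper name and arguments are illustrative, not the PR's actual code), driving pytest from a subprocess might look like:

```python
import subprocess
import sys


def run_pytest_in_subprocess(*pytest_args):
    # Launch pytest in a child interpreter and capture its output,
    # analogous to how numba.runtests drove unittest in a subprocess.
    # (Illustrative helper; not the PR's actual implementation.)
    return subprocess.run(
        [sys.executable, "-m", "pytest", *pytest_args],
        capture_output=True,
        text=True,
    )
```

The returned `CompletedProcess` exposes `returncode`, `stdout`, and `stderr`, which is what the stderr-checking logic discussed above inspects.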
Given that many of the test modules are TestCase subclasses, migrating methods to functions so they can be imported elsewhere seems like a good path forward to the pytest migration endeavor.
Pluck the assertPreciseEqual and assertStridesEqual methods out of the TestCase class (along with many internal helper functions) so they can be used via import.
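For illustration, a plucked-out helper might look like this simplified sketch (not the actual assertStridesEqual implementation):

```python
import numpy as np


def assert_strides_equal(a, b):
    # Module-level stand-in for the former TestCase.assertStridesEqual
    # (simplified sketch): a bare assert replaces self.assertEqual, so
    # the helper can be imported by pytest-style functions and
    # unittest methods alike.
    assert a.strides == b.strides, f"strides differ: {a.strides} != {b.strides}"
```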
Although the make_dummy_type method was moved, it would be great to figure out how to write the function so that a deterministic ID can be generated without access to an instance.
Moving the remaining function 'random' outside of the TestCase class while maintaining its caching behavior meant making it a pytest fixture. However, this only works if the TestFlowControl class is no longer a unittest.TestCase subclass. This commit does so, and replaces unittest API usage with assert statements.
Incorporate Graham Markall's suggestions: archaic functions have been removed, and unique test IDs are now generated using pytest fixtures. Test IDs have class scope and monotonically increase whenever pytest calls test functions.
Create the numba.cuda.tests.test_support module to house tests for numba.cuda.tests.support (used to test test_id generation behavior).
Migrate the test_overload module to exclusively using pytest.
All tests previously inheriting from TestCase class now inherit directly from unittest.TestCase class instead.
Generating a monotonically increasing number for test_id only works when tests are run sequentially. To support running in parallel, let's just use pytest's request.node.nodeid attribute to get the same behavior we previously had with unittest.TestCase.id (e.g. cudapy/test_foo.py::TestFoo::test_foo1).
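A sketch of a nodeid-based fixture (the sanitizing helper is hypothetical; the commit itself uses the raw nodeid):

```python
import re

import pytest


def nodeid_to_safe_name(nodeid: str) -> str:
    # Hypothetical helper: collapse the nodeid's separators into
    # underscores so the ID could also be used in file names.
    return re.sub(r"[^A-Za-z0-9_.]+", "_", nodeid)


@pytest.fixture
def test_id(request):
    # request.node.nodeid is unique per collected test
    # (e.g. "cudapy/test_foo.py::TestFoo::test_foo1"), so it stays
    # correct under pytest-xdist, unlike a process-local counter.
    return request.node.nodeid
```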
Force-pushed from 83f36a0 to 33ad257
A cleaner approach to migrating numba-cuda's test suite to pytest is to decouple `TestCase` methods from the class. Doing so will make removing the TestCase inheritance a trivial matter, and thus pave the way to using pytest. Many instances (no pun intended) where `self` is being used are just to access functions (like `assertEqual`) that can simply be replaced with an assert statement. Of course, there are some other functions that do require a bit of care (and I'm regularly running the test suite to make sure I'm not introducing any regressions). Once this is done, the rest of the migration is pretty much a matter of utilizing the pytest API in appropriate places (fingers crossed).