Refactor TestCase subclasses to directly inherit from unittest.TestCase#883

Open
acosmicflamingo wants to merge 10 commits into
NVIDIA:mainfrom
acosmicflamingo:unittest-pytest-migration-2

Conversation

@acosmicflamingo

A cleaner approach to migrating numba-cuda's test suite to pytest is to decouple TestCase methods from the class. Doing so makes removing the TestCase inheritance a trivial matter, and thus paves the way to using pytest. In many instances (no pun intended), self is used only to access methods (like assertEqual) that can simply be replaced with an assert statement. Of course, there are some other methods that do require a bit of care (and I'm regularly running the test suite to make sure I'm not introducing any regressions).
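For illustration (the test names here are hypothetical, not from the PR), the kind of move being described might look like:

```python
import unittest

# Before: the assertion relies on TestCase inheritance
class TestAddBefore(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

# After: a plain function that pytest can collect, no inheritance needed
def test_add():
    assert 1 + 1 == 2
```

Both versions check the same thing; the second one simply no longer needs the class.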

Once this is done, the rest of the migration is pretty much a matter of utilizing pytest API in appropriate places (fingers crossed).

@copy-pr-bot

copy-pr-bot Bot commented May 10, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@gmarkall
Contributor

/ok to test 3eeb4ae

@gmarkall
Contributor

Thanks for the PR! I've had a quick skim, and started the tests to see how it goes.

@acosmicflamingo
Author

acosmicflamingo commented May 11, 2026

@gmarkall no problem! Thanks for the feedback; I'll implement suggestions in a bit.

Whoops, forgot to mark this PR as a draft; would not want you to have to review the PR for the nth time because you thought I had finally pushed that last commit (or maybe you'd prefer it be in its current state and you periodically check to see how it's going?).

By the way, what is the status of upstream numba's cuda-specific test suite, and what's the relationship between this codebase and theirs? I was finding that some functions from support.py marked for deletion reside in upstream numba's codebase. I was curious as to whether some of the work in this PR should eventually make its way upstream, or whether it's not really supported now that numba-cuda is in the picture.

@acosmicflamingo
Author

/ok to test 65d755b

@gmarkall
Contributor

> would not want you to have to review the PR for the nth time because you thought I had finally pushed that last commit (or maybe you'd prefer it be in its current state and you periodically check to see how it's going?).

I'm quite happy if we just iterate a few times, as long as you're OK with that. I find it hard to do "one perfect review" that outlines perfectly the exact set of suggestions I want to make anyway.

> By the way, what is the status of upstream numba's cuda-specific test suite, and what's the relationship between this codebase and theirs? I was finding that some functions from support.py marked for deletion reside in upstream numba's codebase. I was curious as to whether some of the work in this PR should eventually make its way upstream, or whether it's not really supported now that numba-cuda is in the picture.

numba.cuda in upstream is in maintenance mode for backward compatibility - that is, code written to use it should still work with the latest version of numba installed, but Numba-CUDA is where all new development has happened since we separated the CUDA target from upstream Numba. So things will stay quite static upstream, whilst evolution of the code in this repo can proceed independently.

There's a note about it in the Numba documentation (https://numba.readthedocs.io/en/stable/cuda/overview.html#built-in-cuda-target-deprecation-and-maintenance-status), but I realise now we don't explicitly spell it out in the Numba-CUDA docs.

@gmarkall
Contributor

/ok to test 65d755b

@gmarkall
Contributor

> /ok to test 65d755b

Thanks for trying to start CI - unfortunately, only NVIDIA staff can run CI. Please ping me if you're waiting to run it to get feedback from the CI test suite though.

@acosmicflamingo
Author

acosmicflamingo commented May 11, 2026

>> would not want you to have to review the PR for the nth time because you thought I had finally pushed that last commit (or maybe you'd prefer it be in its current state and you periodically check to see how it's going?).
>
> I'm quite happy if we just iterate a few times, as long as you're OK with that. I find it hard to do "one perfect review" that outlines perfectly the exact set of suggestions I want to make anyway.
>
>> By the way, what is the status of upstream numba's cuda-specific test suite, and what's the relationship between this codebase and theirs? I was finding that some functions from support.py marked for deletion reside in upstream numba's codebase. I was curious as to whether some of the work in this PR should eventually make its way upstream, or whether it's not really supported now that numba-cuda is in the picture.
>
> numba.cuda in upstream is in maintenance mode for backward compatibility - that is, code written to use it should still work with the latest version of numba installed, but Numba-CUDA is where all new development has happened since we separated the CUDA target from upstream Numba. So things will stay quite static upstream, whilst evolution of the code in this repo can proceed independently.
>
> There's a note about it in the Numba documentation (https://numba.readthedocs.io/en/stable/cuda/overview.html#built-in-cuda-target-deprecation-and-maintenance-status), but I realise now we don't explicitly spell it out in the Numba-CUDA docs.

Got it! I'm okay with incremental reviews; I was actually thinking it's the better approach after seeing that this PR alone has 22 modified files, and I have a loooong way to go.

Thank you for the background information and pointing me to the documentation; that really helps! I wonder whether I should add a mention of it in this separate low-priority PR I had made tweaking CONTRIBUTING.md.

@gmarkall
Contributor

> I have a loooong way to go.

Maybe...? It looks most of the way there to me, I was expecting we'd just be doing a few tweaks (assuming that fixing CI is not too onerous). Let me know if you need any advice on reproducing the failures locally.

> I wonder whether I should add a mention of it in this separate low-priority #881 I had made tweaking CONTRIBUTING.md.

Thank you for the kind offer - I think that there are a few directional changes that need to be made to our guidance on Numba vs Numba-CUDA soon that will make it hard to align with an external contribution - could we please keep the other PR as-is for now?

@acosmicflamingo
Author

>> I have a loooong way to go.
>
> Maybe...? It looks most of the way there to me, I was expecting we'd just be doing a few tweaks (assuming that fixing CI is not too onerous). Let me know if you need any advice on reproducing the failures locally.

I see. This PR is about decoupling unittest.TestCase from the test suite, and removing TestCase (a unittest.TestCase subclass) will break all tests that use the unittest API (e.g. self.assertEqual) through inheritance. The top-down approach of removing that functionality outright was what I was doing here, but it felt a lot messier than taking the bottom-up approach of moving functionality out of the class, where removing the base class inheritance would just leave me with errors about the missing unittest API. The approach I'm taking also lets me run pytest on the suite to make sure that I haven't broken anything along the way :D

>> I wonder whether I should add a mention of it in this separate low-priority #881 I had made tweaking CONTRIBUTING.md.
>
> Thank you for the kind offer - I think that there are a few directional changes that need to be made to our guidance on Numba vs Numba-CUDA soon that will make it hard to align with an external contribution - could we please keep the other PR as-is for now?

Yes that's fine with me!

@acosmicflamingo
Author

>> /ok to test 65d755b
>
> Thanks for trying to start CI - unfortunately, only NVIDIA staff can run CI. Please ping me if you're waiting to run it to get feedback from the CI test suite though.

Ha! That explains the disobedient bot ;) I'm going to try and reproduce the issue by creating a conda environment, since the issue seems to stem from there. I don't want to ping you every 4 minutes to try and fix that particular issue; I will save your patience for another time.

@acosmicflamingo acosmicflamingo changed the title Decouple unittest.TestCase dependency from test suite Refactor TestCase subclasses to directly inherit from unittest.TestCase May 11, 2026
@acosmicflamingo
Author

@gmarkall never used pixi before; pretty neat package manager! After tweaking the subprocess command to address cuda.tests not being found, I'm hitting another issue: a false positive that only appears when tests are executed in parallel (pixi run -e cu-13-0-py313 pytest $PIXI_PROJECT_ROOT/testing -n auto --dist loadscope --loadscope-reorder -v) and not sequentially (pixi run -e cu-13-0-py313 pytest $PIXI_PROJECT_ROOT/testing -v):

```
assert status.stderr == "", status.stderr
       ^^^^^^^^^^^^^^^^^^^
E   AssertionError: .../pytest_benchmark/logger.py:44: PytestBenchmarkWarning: Benchmarks are automatically disabled because xdist plugin is active. Benchmarks cannot be performed reliably in a parallelized environment.
E     warner(PytestBenchmarkWarning(text))

numba_cuda/numba/cuda/tests/support.py:416: AssertionError
```

The original code would assert that "OK" was present in stderr, which I figured was just a way to ensure the test really did pass:

```python
if no_tests_ran in status.stderr:
    self.skipTest(no_tests_ran)
else:
    self.assertIn("OK", status.stderr)
return status
```

After switching to pytest instead of numba.runtests, the assumption I made was that status.stderr would always be empty in a successful run, but I forgot about warnings that could still end up appearing in the error stream.

Do you have a favorite out of the potential solutions below?

### Option A ###

```diff
 if no_tests_ran in status.stderr:
     self.skipTest(no_tests_ran)
-else:
-    self.assertIn("OK", status.stderr)
 return status
```

### Option B ###

```diff
+_IGNORED_STDERR_PATTERNS = [
+    "PytestBenchmarkWarning",
+    "Benchmarks are automatically disabled",
+]
+
+def _unexpected_stderr_found(stderr):
+    lines = stderr.splitlines()
+    return any(
+        line and not any(pat in line for pat in _IGNORED_STDERR_PATTERNS)
+        for line in lines
+    )
+
 no_tests_ran = "NO TESTS RAN"
 if no_tests_ran in status.stderr:
     pytest.skip(no_tests_ran)
+elif _unexpected_stderr_found(status.stderr):
+    pytest.fail(f"Unexpected stderr output:\n{status.stderr}")
 return status
```

### Option C ###
???

@acosmicflamingo acosmicflamingo force-pushed the unittest-pytest-migration-2 branch from fc6cddf to 39e4aea Compare May 12, 2026 17:04
This is the first of many commits that will be migrating the test suite to using pytest. Since numba.runtests uses unittest, switching to pytest when running subprocesses seems like a good starting point.

Given that many of the test modules are TestCase subclasses, migrating methods to functions so they can be imported elsewhere seems like a good path forward for the pytest migration endeavor.

Pluck the assertPreciseEqual and assertStridesEqual methods out of the TestCase class (along with many internal helper functions) so they can be used via importing.

Although the make_dummy_type method was moved, it would be great to figure out how to write the function in a way where a deterministic ID can be generated without access to an instance.

Moving the remaining function 'random' outside of the TestCase class while maintaining its caching behavior meant making it a pytest fixture. However, this will only work if the TestFlowControl class is no longer a unittest.TestCase subclass. This commit does so, and replaces unittest API usage with assert statements.

Incorporating Graham Markall's suggestions; archaic functions have been removed, and unique test IDs are now generated using pytest fixtures. Test IDs have class scope and monotonically increase whenever pytest calls test functions.

Create the numba.cuda.tests.test_support module to house tests for numba.cuda.tests.support (used to test test_id generation behavior).

Migrate the test_overload module to exclusively using pytest.

All tests previously inheriting from the TestCase class now inherit directly from unittest.TestCase instead.

Generating a monotonically increasing number for test_id only works when tests are run sequentially. To support running in parallel, let's just use pytest's request.node.nodeid attribute to get the same behavior we had before with unittest.TestCase.id (e.g. cudapy/test_foo.py::TestFoo::test_foo1).
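A minimal sketch of the nodeid-based approach described in the last commit message (fixture and test names here are hypothetical):

```python
import pytest

@pytest.fixture
def test_id(request):
    # request.node.nodeid looks like "cudapy/test_foo.py::TestFoo::test_foo1"
    # and is stable across pytest-xdist workers, unlike a per-process counter
    return request.node.nodeid

# A test simply requests the fixture by name:
def test_uses_id(test_id):
    assert "::" in test_id
```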
@acosmicflamingo acosmicflamingo force-pushed the unittest-pytest-migration-2 branch from 83f36a0 to 33ad257 Compare May 14, 2026 01:03