author = "Severin Dicks"
draft = false
+++

# Rapids-singlecell release 0.15.0

We are proud to announce rapids-singlecell release 0.15.0, which comes with many new features as well as changes to the installation process.

## Why the packaging changes

In earlier versions of rapids-singlecell, all GPU kernels were written as CuPy RawKernels.
These were compiled the first time you called them — in your environment, on your machine.

That worked, but it came with friction.

Starting with 0.15.0, these kernels are compiled once at build time and shipped as nanobind/CUDA C++ extension modules inside the wheel.
The result is a more conventional compiled-extension workflow: you `pip install` the package and every kernel is ready immediately.

### Packaging changes in detail

The GPU kernels that were previously CuPy RawKernels are now nanobind C++ extensions built with `scikit-build-core` and CMake.
This gives us:

The public API is unchanged:

```python
import rapids_singlecell as rsc
```

Your existing analysis scripts should work without modification.

### CUDA-specific wheels

Because the kernels are now compiled binaries, we need to ship one wheel per CUDA major version.
(Python wheel tags don't encode CUDA version, so we encode it in the package name — the same approach used by CuPy, PyTorch, and other CUDA-dependent packages.)

Both wheels are available for **x86_64** and **aarch64** on Linux.

If you have a Blackwell GPU (B200, GB200) and want the best out-of-the-box performance, the CUDA 13 wheel includes native binaries for Blackwell architectures.
The CUDA 12 wheel still supports Blackwell through PTX just-in-time compilation, so it will work, but the first kernel launch on Blackwell will be slightly slower while the driver JIT-compiles the PTX.

### How to install

#### Prebuilt wheel (recommended)

Pick the wheel that matches your CUDA version:

```bash
pip install rapids-singlecell-cu13   # CUDA 13
pip install rapids-singlecell-cu12   # CUDA 12
```

This installs rapids-singlecell with precompiled kernels, but does **not** pull in the RAPIDS stack (cupy, cuml, cudf, etc.).
If you manage those dependencies separately — for example, through conda — this is all you need.

#### Prebuilt wheel with RAPIDS dependencies

If you want pip to also install the matching RAPIDS and CuPy packages:

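A minimal sketch of that command, using the `[rapids]` extra on the CUDA-specific package name (quoting the requirement keeps the shell from expanding the brackets):

```shell
# Prebuilt wheel plus the matching RAPIDS stack via the [rapids] extra.
pip install "rapids-singlecell-cu13[rapids]"   # CUDA 13
# or, for CUDA 12:
pip install "rapids-singlecell-cu12[rapids]"
```
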
Note: on the prebuilt wheels, the dependency extra is always `[rapids]`.
The CUDA version is determined by which package name you install — `rapids-singlecell-cu12` or `rapids-singlecell-cu13`.
If you're building from source instead, the extras are `[rapids-cu12]` and `[rapids-cu13]`.

For most users, upgrading is straightforward:

Run `nvidia-smi` or `nvcc --version` to confirm whether you're on CUDA 12.x or CUDA 13.x, and install the matching wheel.
If you're using conda, make sure the CUDA runtime library version in your environment matches the wheel you install — e.g., `cuda-cudart` from the `nvidia` channel should be 12.x for the cu12 wheel or 13.x for the cu13 wheel.

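As a quick sketch, the version-to-wheel mapping can be scripted; the `pick_wheel` helper below is hypothetical, not part of rapids-singlecell:

```shell
# Hypothetical helper: map a CUDA version (as reported by
# `nvidia-smi` or `nvcc --version`) to the matching wheel name.
pick_wheel() {
  case "$1" in
    12|12.*) echo "rapids-singlecell-cu12" ;;
    13|13.*) echo "rapids-singlecell-cu13" ;;
    *)       echo "unsupported CUDA version: $1" >&2; return 1 ;;
  esac
}

pick_wheel "12.4"   # prints rapids-singlecell-cu12
```
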
### What about `pip install rapids-singlecell`?

The plain install — `pip install rapids-singlecell`, without the `-cu12` or `-cu13` suffix — still works.
It will compile the CUDA extensions from source during installation.

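A sketch of that path, under the assumption that a source build needs a local CUDA toolkit (with `nvcc`) and a C++ compiler, since the extensions are compiled during installation:

```shell
# Source install: compiles the nanobind/CUDA extensions locally.
pip install rapids-singlecell

# For source builds the RAPIDS extras are CUDA-suffixed:
pip install "rapids-singlecell[rapids-cu12]"   # or [rapids-cu13]
```
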
Both energy distance and co-occurrence kernels gained multi-GPU support ([#545](https://github.com/scverse/rapids-singlecell/pull/545)).

- **CUDA kernel error surfacing** — launch errors are now raised instead of silently continuing ([#619](https://github.com/scverse/rapids-singlecell/pull/619)).
- **RAPIDS 26.04 and Python 3.14 support** across all CI and conda environments.

A big thank you to everyone who tested the pre-releases and helped surface issues before this release went out.

For questions and bug reports, visit the [GitHub issue tracker](https://github.com/scverse/rapids_singlecell/issues).