
Commit 056d446

Add back README to the Python package (#222)
1 parent 97f8a9e commit 056d446

2 files changed

Lines changed: 57 additions & 1 deletion


README.md

Lines changed: 7 additions & 1 deletion
````diff
@@ -23,9 +23,15 @@ packages in that they are made to be:
   the different PyTorch build configurations (various CUDA versions
   and C++ ABIs). Furthermore, older C library versions must be supported.
 
+## Components
+
+- You can load kernels from the Hub using the [`kernels`](kernels/) Python package.
+- If you are a kernel author, you can build your kernels with [kernel-builder](builder/).
+- Hugging Face maintains a set of kernels in [kernels-community](https://huggingface.co/kernels-community).
+
 ## 🚀 Quick Start
 
-Install the `kernels` package with `pip` (requires `torch>=2.5` and CUDA):
+Install the `kernels` Python package with `pip` (requires `torch>=2.5` and CUDA):
 
 ```bash
 pip install kernels
````
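The "Portable" loading that the `kernels` package advertises boils down to importing Python modules from arbitrary paths rather than from `PYTHONPATH`. Here is a minimal standard-library sketch of that mechanism (an illustration only, not the library's actual implementation; `load_module_from_path` and `toy_kernel` are hypothetical names):

```python
# Sketch: load a Python module from a path outside PYTHONPATH,
# registering it under a chosen (unique) name in sys.modules.
# This is NOT the `kernels` implementation, just the underlying idea.
import importlib.util
import sys
import tempfile
from pathlib import Path


def load_module_from_path(name: str, path: Path):
    """Load a module from an arbitrary file path, bypassing PYTHONPATH."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module  # the chosen name keeps versions distinct
    spec.loader.exec_module(module)
    return module


# Demo: write a tiny module to a temporary directory and load it by path.
with tempfile.TemporaryDirectory() as tmp:
    mod_path = Path(tmp) / "toy_kernel.py"
    mod_path.write_text("def double(x):\n    return 2 * x\n")
    toy = load_module_from_path("toy_kernel_v1", mod_path)
    print(toy.double(21))
```

Because each load registers under a caller-chosen name, two versions of the same module can coexist in one process, which is the "Unique" property described in the new `kernels/README.md` below.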

kernels/README.md

Lines changed: 50 additions & 0 deletions
````diff
@@ -0,0 +1,50 @@
+# kernels
+
+The Kernel Hub allows Python libraries and applications to load compute
+kernels directly from the [Hub](https://hf.co/). To support this kind
+of dynamic loading, Hub kernels differ from traditional Python kernel
+packages in that they are made to be:
+
+- Portable: a kernel can be loaded from paths outside `PYTHONPATH`.
+- Unique: multiple versions of the same kernel can be loaded in the
+  same Python process.
+- Compatible: kernels must support all recent versions of Python and
+  the different PyTorch build configurations (various CUDA versions
+  and C++ ABIs). Furthermore, older C library versions must be supported.
+
+The `kernels` Python package is used to load kernels from the Hub.
+
+## 🚀 Quick Start
+
+Install the `kernels` package with `pip` (requires `torch>=2.5` and CUDA):
+
+```bash
+pip install kernels
+```
+
+Here is how you would use the [activation](https://huggingface.co/kernels-community/activation) kernels from the Hugging Face Hub:
+
+```python
+import torch
+
+from kernels import get_kernel
+
+# Download optimized kernels from the Hugging Face hub
+activation = get_kernel("kernels-community/activation")
+
+# Random tensor
+x = torch.randn((10, 10), dtype=torch.float16, device="cuda")
+
+# Run the kernel
+y = torch.empty_like(x)
+activation.gelu_fast(y, x)
+
+print(y)
+```
+
+You can [search for kernels](https://huggingface.co/models?other=kernels) on
+the Hub.
+
+## 📚 Documentation
+
+Read the [documentation of kernels](https://huggingface.co/docs/kernels/).
````
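For readers without a CUDA device, the `gelu_fast` kernel used in the README example is commonly understood to compute the tanh approximation of GELU. A pure-Python sketch of that formula (an assumption for illustration, not taken from the kernel's source; `gelu_fast_reference` is a hypothetical helper):

```python
import math


def gelu_fast_reference(x: float) -> float:
    # Tanh approximation of GELU, assumed to match what gelu_fast computes:
    #   0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))


print(gelu_fast_reference(1.0))
```

A scalar reference like this is handy for sanity-checking a GPU kernel's output element-wise on a small tensor.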
