
Commit b5c66f4

Update README.md

Adding basic template & initial info

1 parent: 9b37cde

File tree: 1 file changed (+34, -1 lines)


README.md

Lines changed: 34 additions & 1 deletion
# llama-cpp-python-wheels

Pre-built wheels for llama-cpp-python across platforms and CUDA versions.

## Available Wheels

| File | OS | Python | CUDA | Driver | GPU Support | Size |
|------|-----|--------|------|--------|-------------|------|
| [llama_cpp_python-0.3.16+cuda13.0.sm86.ampere-cp313-cp313-win_amd64.whl](../../releases) | Windows 10/11 | 3.13 | 13.0 | 580+ | RTX 30 series (Ampere, sm_86) | 61.4 MB |
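The wheel filename above encodes everything the table says, following the standard wheel naming scheme (PEP 427). A small sketch of decoding it to check compatibility before downloading (the filename is the one from the table; nothing here is specific to llama-cpp-python):

```python
# Sketch: decode a wheel filename per PEP 427 to check it matches your
# interpreter before downloading. Filename taken from the table above.
import sys

wheel = "llama_cpp_python-0.3.16+cuda13.0.sm86.ampere-cp313-cp313-win_amd64.whl"

# PEP 427: {distribution}-{version}-{python tag}-{abi tag}-{platform tag}.whl
dist, version, py_tag, abi_tag, plat_tag = wheel[:-len(".whl")].split("-")

print(f"package : {dist}")     # llama_cpp_python
print(f"version : {version}")  # local tag carries the CUDA/arch build info
print(f"needs   : CPython {py_tag[2]}.{py_tag[3:]} on {plat_tag}")

# Quick check against the running interpreter
this_py = f"cp{sys.version_info.major}{sys.version_info.minor}"
print("matches this interpreter:", this_py == py_tag)
```

The `+cuda13.0.sm86.ampere` part is a local version label, so pip treats it as version 0.3.16 with build metadata attached.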
## Installation

Download the wheel from [Releases](../../releases) and install it:

```bash
pip install llama_cpp_python-0.3.16+cuda13.0.sm86.ampere-cp313-cp313-win_amd64.whl
```
## Verification

```python
# Importing loads the compiled CUDA-enabled library; an ImportError here
# usually means a missing CUDA runtime or a mismatched Python version.
from llama_cpp import Llama
print("llama-cpp-python with CUDA support installed successfully")
```
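A successful import confirms the compiled library loads, but not that GPU offload was compiled in. A slightly stronger check, sketched below: `llama_supports_gpu_offload` is the name of llama.cpp's C API capability check as re-exported by the Python bindings — treat that name as an assumption about your installed version. The snippet is guarded so it also runs where the wheel is not installed:

```python
# Sketch: report whether the installed wheel actually has GPU offload
# compiled in. llama_supports_gpu_offload is llama.cpp's C API name
# (assumed to be re-exported at the top level of the bindings).
import importlib.util

def cuda_wheel_status() -> str:
    if importlib.util.find_spec("llama_cpp") is None:
        return "llama_cpp is not installed in this environment"
    import llama_cpp
    return (f"llama_cpp {llama_cpp.__version__}, "
            f"GPU offload: {llama_cpp.llama_supports_gpu_offload()}")

print(cuda_wheel_status())
```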
## Build Notes

Built with:

- Visual Studio 2022 Build Tools
- CUDA Toolkit 13.0
- CMAKE_CUDA_ARCHITECTURES=86
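The list above maps onto a build command roughly like the following. This is a sketch, not the exact invocation used for the released wheels: the `CMAKE_ARGS` mechanism and `-DGGML_CUDA=on` are llama-cpp-python's documented way to enable CUDA at build time, and `86` matches the CMAKE_CUDA_ARCHITECTURES value above.

```bash
# Sketch: rebuild a CUDA wheel like the one above. Assumes VS 2022 Build
# Tools and CUDA Toolkit 13.0 are installed and on PATH; exact flags may
# differ from the ones used for the released wheels.
CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=86" \
pip wheel llama-cpp-python==0.3.16 --no-deps -w dist
```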
30+
31+
## License
32+
33+
MIT
34+
35+
Wheels are built from [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) (MIT License)
