
Commit 732cb6a

unamedkr and claude committed

Add one-command quickstart and contributing guide

- scripts/quickstart.sh: auto-downloads the model, builds, converts, and runs
- CONTRIBUTING.md: setup guide, code standards, module ownership
- README quick start simplified to a single bash command

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
1 parent 166a16d commit 732cb6a

4 files changed

Lines changed: 157 additions & 10 deletions


CONTRIBUTING.md

Lines changed: 61 additions & 0 deletions
@@ -0,0 +1,61 @@
# Contributing to TurboQuant.cpp

Thank you for your interest in contributing! Here's how to get started.

## Quick Setup

```bash
git clone https://github.com/quantumaikr/TurboQuant.cpp
cd TurboQuant.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Debug -DTQ_BUILD_TESTS=ON
cmake --build build -j$(nproc 2>/dev/null || sysctl -n hw.ncpu)
ctest --test-dir build --output-on-failure
```
## What to Work On

Check [Issues](https://github.com/quantumaikr/TurboQuant.cpp/issues) for tasks labeled `good first issue` or `help wanted`.

**High-impact areas:**

- New model architectures (Llama, Phi, Gemma)
- AVX2/AVX-512 SIMD kernels for x86
- Metal GPU compute shaders
- Long-context benchmarks (8K, 32K, 128K tokens)
## Code Standards

- **C11** for the core library (`src/`), **C++17** for tests
- No external dependencies in the core (libc/libm/pthread only)
- Every public function needs a test
- Run tests before submitting: `ctest --test-dir build`
## Module Ownership

Each module owns an exclusive set of files to prevent merge conflicts:

| Module | Files |
|--------|-------|
| `polar` | `src/core/tq_polar.*`, `tests/test_polar.*` |
| `qjl` | `src/core/tq_qjl.*`, `tests/test_qjl.*` |
| `turbo` | `src/core/tq_turbo.*`, `tests/test_turbo.*` |
| `engine` | `src/engine/*` |
| `cache` | `src/cache/*` |
| `simd` | `src/backend/cpu/*` |
## Pull Request Process

1. Fork the repository and create a feature branch
2. Make your changes
3. Ensure all tests pass and no new warnings are introduced
4. Submit a PR with a clear description
## Cross-Platform Checklist

Before submitting, verify:

- [ ] NEON intrinsics are inside `#ifdef __ARM_NEON` guards
- [ ] No GCC warnings (`-Wall -Wextra -Wpedantic`)
- [ ] A scalar fallback exists for every SIMD code path
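The first and third checklist items go together: every NEON path sits behind a guard, and a scalar loop produces the same result on targets without NEON. A minimal sketch of the pattern (the function name is illustrative, not taken from the repo; the guard also checks `__aarch64__` because `vaddvq_f32` is an AArch64-only intrinsic):

```c
#include <stddef.h>
#if defined(__ARM_NEON) && defined(__aarch64__)
#include <arm_neon.h>
#endif

/* Dot product with a guarded NEON fast path and a scalar fallback.
 * On non-NEON targets the tail loop processes the entire input. */
float tq_dot_f32(const float *a, const float *b, size_t n) {
    float sum = 0.0f;
    size_t i = 0;
#if defined(__ARM_NEON) && defined(__aarch64__)
    float32x4_t acc = vdupq_n_f32(0.0f);
    for (; i + 4 <= n; i += 4)
        acc = vmlaq_f32(acc, vld1q_f32(a + i), vld1q_f32(b + i));
    sum += vaddvq_f32(acc);   /* horizontal add across the 4 lanes */
#endif
    for (; i < n; ++i)        /* scalar fallback and remainder tail */
        sum += a[i] * b[i];
    return sum;
}
```

Because the scalar loop doubles as the remainder handler, the function compiles and gives identical results whether or not the SIMD path is enabled.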
## License

By contributing, you agree that your contributions will be licensed under the Apache 2.0 License.

README.ko.md

Lines changed: 12 additions & 5 deletions
@@ -28,18 +28,25 @@ Qwen3.5-0.8B, Q4_0, CPU only, Apple Silicon M-series
 ---
 
-## 30-Second Quick Start
+## Quick Start
 
 ```bash
 git clone https://github.com/quantumaikr/TurboQuant.cpp && cd TurboQuant.cpp
-cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build -j$(nproc)
+bash scripts/quickstart.sh "What is deep learning?"
+```
+
+That's it. The script automatically builds the engine, downloads [Qwen3.5-0.8B](https://huggingface.co/Qwen/Qwen3.5-0.8B) (~1.5 GB), converts it to TQM, and runs inference.
 
-# Convert model (one-time, auto-detects the HuggingFace cache)
-./build/tq_convert
+<details>
+<summary>Manual setup (if you prefer step by step)</summary>
 
-# Run
+```bash
+cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build -j$(nproc)
+pip3 install huggingface_hub && python3 -c "from huggingface_hub import snapshot_download; snapshot_download('Qwen/Qwen3.5-0.8B')"
+./build/tq_convert -o model.tqm
 ./build/tq_run model.tqm -p "What is deep learning?" -j 4
 ```
+</details>
 
 ```
 Prompt: What is deep learning?

README.md

Lines changed: 12 additions & 5 deletions
@@ -28,18 +28,25 @@ Same model, same quantization, same hardware. Apples-to-apples.
 ---
 
-## 30-Second Quick Start
+## Quick Start
 
 ```bash
 git clone https://github.com/quantumaikr/TurboQuant.cpp && cd TurboQuant.cpp
-cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build -j$(nproc)
+bash scripts/quickstart.sh "What is deep learning?"
+```
+
+That's it. The script builds the engine, downloads [Qwen3.5-0.8B](https://huggingface.co/Qwen/Qwen3.5-0.8B) (~1.5 GB), converts it to TQM, and runs inference.
 
-# Convert model (one-time, auto-detects from HuggingFace cache)
-./build/tq_convert
+<details>
+<summary>Manual setup (if you prefer step by step)</summary>
 
-# Run
+```bash
+cmake -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build -j$(nproc)
+pip3 install huggingface_hub && python3 -c "from huggingface_hub import snapshot_download; snapshot_download('Qwen/Qwen3.5-0.8B')"
+./build/tq_convert -o model.tqm
 ./build/tq_run model.tqm -p "What is deep learning?" -j 4
 ```
+</details>
 
 ```
 Prompt: What is deep learning?

scripts/quickstart.sh

Lines changed: 72 additions & 0 deletions
@@ -0,0 +1,72 @@
#!/bin/bash
# TurboQuant.cpp — One-command quickstart
# Downloads Qwen3.5-0.8B, builds the engine, converts the model, and runs inference.
#
# Usage:
#   bash scripts/quickstart.sh
#   bash scripts/quickstart.sh "Your prompt here"

set -e

PROMPT="${1:-What is deep learning?}"
THREADS="${2:-4}"
ROOT="$(cd "$(dirname "$0")/.." && pwd)"
cd "$ROOT"

echo "=== TurboQuant.cpp Quickstart ==="
echo ""

# Step 1: Build
if [ ! -f build/tq_run ]; then
    echo "[1/4] Building..."
    cmake -B build -DCMAKE_BUILD_TYPE=Release -DTQ_BUILD_TESTS=OFF -DTQ_BUILD_BENCH=OFF -Wno-dev 2>/dev/null
    cmake --build build -j"$(nproc 2>/dev/null || sysctl -n hw.ncpu)" --target tq_run --target tq_convert 2>&1 | tail -3
    echo "      Done."
else
    echo "[1/4] Build found."
fi

# Step 2: Download model if not cached
MODEL_DIR="$HOME/.cache/huggingface/hub/models--Qwen--Qwen3.5-0.8B"
if [ ! -d "$MODEL_DIR" ]; then
    echo "[2/4] Downloading Qwen3.5-0.8B (~1.5 GB)..."
    if command -v huggingface-cli &>/dev/null; then
        huggingface-cli download Qwen/Qwen3.5-0.8B
    elif command -v python3 &>/dev/null; then
        python3 -c "
from huggingface_hub import snapshot_download
snapshot_download('Qwen/Qwen3.5-0.8B')
print('Download complete.')
" 2>/dev/null || {
            echo "      Installing huggingface_hub..."
            pip3 install --quiet huggingface_hub 2>/dev/null || pip3 install --quiet --break-system-packages huggingface_hub 2>/dev/null
            python3 -c "
from huggingface_hub import snapshot_download
snapshot_download('Qwen/Qwen3.5-0.8B')
print('Download complete.')
"
        }
    else
        echo "Error: python3 or huggingface-cli is required for the model download."
        echo "  Install: pip3 install huggingface_hub"
        echo "  Or download manually: https://huggingface.co/Qwen/Qwen3.5-0.8B"
        exit 1
    fi
    echo "      Done."
else
    echo "[2/4] Model found in cache."
fi

# Step 3: Convert to TQM
if [ ! -f model.tqm ]; then
    echo "[3/4] Converting to TQM format..."
    ./build/tq_convert -o model.tqm -j "$THREADS"
    echo "      Done."
else
    echo "[3/4] model.tqm found."
fi

# Step 4: Run inference
echo "[4/4] Running inference..."
echo ""
./build/tq_run model.tqm -p "$PROMPT" -j "$THREADS" -n 100
