Commit a7d1103: Use LiteLLM proxy in baseline inference
1 parent 2a3d4bc

File tree: 5 files changed, +395 −58 lines


README.md

Lines changed: 12 additions & 0 deletions

````diff
@@ -122,6 +122,18 @@ openenv validate --verbose
 openenv validate http://127.0.0.1:8000
 ```
 
+Run the hackathon baseline inference script with the injected LiteLLM proxy:
+
+```bash
+API_BASE_URL=https://your-litellm-proxy/v1 \
+API_KEY=... \
+MODEL_NAME=... \
+uv run python inference.py
+```
+
+`inference.py` uses the OpenAI client against `API_BASE_URL`, emits structured
+`[START]` / `[STEP]` / `[END]` logs, and evaluates every published task id.
+
 ## GitHub-only deployment to Hugging Face Spaces
 
 This repo is set up so deployment can be triggered from GitHub Actions instead of
````
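The added README text says `inference.py` points the OpenAI client at an OpenAI-compatible LiteLLM proxy and emits `[START]` / `[STEP]` / `[END]` logs. The sketch below illustrates the same flow with only the standard library; it is a hypothetical approximation, not the actual `inference.py` (which uses the official OpenAI client), and the `log`, `complete`, and `run_task` helpers are invented for illustration. The three environment variables match the README invocation.

```python
import json
import os
import urllib.request


def log(stage: str, task_id: str, detail: str = "") -> str:
    """Format one structured log line in the [START]/[STEP]/[END] style."""
    line = f"[{stage}] task={task_id}"
    return f"{line} {detail}".rstrip()


def complete(prompt: str) -> str:
    """POST a chat completion to the OpenAI-compatible proxy endpoint."""
    req = urllib.request.Request(
        os.environ["API_BASE_URL"].rstrip("/") + "/chat/completions",
        data=json.dumps({
            "model": os.environ["MODEL_NAME"],
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]


def run_task(task_id: str, prompt: str) -> str:
    """Run one task, emitting structured logs around the proxy call."""
    print(log("START", task_id))
    answer = complete(prompt)
    print(log("STEP", task_id, f"received {len(answer)} chars"))
    print(log("END", task_id))
    return answer
```

Because the proxy speaks the OpenAI chat-completions wire format, swapping this stdlib call for `OpenAI(base_url=..., api_key=...)` from the `openai` package changes only the transport, not the request shape.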

0 commit comments
