Commit 2a5225f

updates
1 parent 0e5e1d4 commit 2a5225f

File tree: 4 files changed (+18 −24 lines)

README.md
Lines changed: 2 additions & 0 deletions

```diff
@@ -1,5 +1,7 @@
 # llmspy.org
 
+Website: [llmspy.org](https://llmspy.org)
+
 This is a Next.js application generated with
 [Create Fumadocs](https://github.com/fuma-nama/fumadocs).
 
```
content/docs/getting-started/index.mdx
Lines changed: 4 additions & 9 deletions

```diff
@@ -24,12 +24,11 @@ We're excited to announce **[llms.py](https://github.com/ServiceStack/llms)** -
 7. **Future-Proof**: Easily add new providers as they emerge
 8. **No Setup**: Just download and use, configure preferred LLMs in [llms.json](https://github.com/ServiceStack/llms/blob/main/llms/llms.json)
 
-## Why create llms.py?
+**llms.py** transforms the complexity of managing multiple LLM providers into a simple, unified experience.
+Whether you're researching capabilities of new models, building the next breakthrough AI application,
+or just want reliable access to the best models available, llms.py has you covered.
 
-As part of our work in developing a new OSS AI Generation platform, we needed a lightweight LLM gateway for
-usage within [ComfyUI](https://www.comfy.org). Unfortunately, the popular Python option **litellm** requires
-[60+ dependencies](https://github.com/BerriAI/litellm/blob/main/requirements.txt), which is a deal breaker
-in an open plugin ecosystem like ComfyUI where every dependency can break the environment.
+Get started today and avoid expensive cloud lock-ins with the freedom of provider-agnostic AI development! 🎉
 
 ## 🌐 Configurable Multi-Provider Gateway
 
@@ -157,10 +156,6 @@ Automatic failover and cost optimization across providers.
 
 ---
 
-**llms.py** transforms the complexity of managing multiple LLM providers into a simple, unified experience. Whether you're researching capabilities of new models, building the next breakthrough AI application, or just want reliable access to the best models available, llms.py has you covered.
-
-Get started today and avoid expensive cloud lock-ins with the freedom of provider-agnostic AI development! 🎉
-
 ## Links
 
 - 📚 [GitHub Repository](https://github.com/ServiceStack/llms)
```
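The second hunk's context line mentions "Automatic failover and cost optimization across providers." As a rough sketch of that idea only (not llms.py's actual implementation; the provider names, functions, and exception type here are hypothetical), a gateway can try each configured provider in order and fall through on errors:

```python
# Illustrative sketch of provider failover, NOT llms.py's actual code.
# Provider names, stub functions, and ProviderError are hypothetical.

class ProviderError(Exception):
    """Raised when a provider cannot serve the request."""

def chat_with_failover(providers, prompt):
    """Try each (name, send) pair in order; return the first successful reply."""
    errors = []
    for name, send in providers:
        try:
            return name, send(prompt)
        except ProviderError as e:
            errors.append((name, e))  # record the failure, try the next provider
    raise ProviderError(f"all providers failed: {errors}")

# Usage with stub providers: the first fails, the second answers.
def flaky(prompt):
    raise ProviderError("rate limited")

def stable(prompt):
    return f"echo: {prompt}"

provider, reply = chat_with_failover([("openrouter", flaky), ("ollama", stable)], "hi")
# provider == "ollama", reply == "echo: hi"
```

Ordering providers by price would give the cost-optimization half of the feature: the cheapest healthy provider always answers first.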

content/docs/getting-started/installation.mdx
Lines changed: 1 addition & 5 deletions

```diff
@@ -85,11 +85,7 @@ export MISTRAL_API_KEY="..."
 
 After installation, verify it's working:
 
-<ShellCommand>llms --help</ShellCommand>
-
-<ShellCommand>llms --init</ShellCommand>
-
-<ShellCommand>llms --list</ShellCommand>
+<ShellCommand>llms ls</ShellCommand>
 
 ## Updating
 
```

content/docs/index.mdx
Lines changed: 11 additions & 10 deletions

```diff
@@ -7,16 +7,6 @@ description: Lightweight OpenAI compatible CLI and server gateway for multiple L
 
 **llms.py** is a super lightweight CLI tool and OpenAI-compatible server that acts as a **configurable gateway** over multiple Large Language Model (LLM) providers.
 
-## Key Features
-
-- **🪶 Ultra-Lightweight**: Single file with just one `aiohttp` dependency
-- **🌐 Multi-Provider Support**: OpenRouter, Ollama, OpenAI, Anthropic, Google, Grok, Groq, Qwen, and more
-- **🎯 Intelligent Routing**: Automatic failover between providers
-- **💻 Web UI**: ChatGPT-like interface with dark mode
-- **📊 Built-in Analytics**: Track costs, tokens, and usage
-- **🔒 Privacy First**: All data stored locally in browser
-- **🐳 Docker Ready**: Pre-built images available
-
 ## Quick Start
 
 <Steps>
@@ -52,6 +42,17 @@ Access the UI at `http://localhost:8000`
 </Step>
 </Steps>
 
+
+## Key Features
+
+- **🪶 Ultra-Lightweight**: Single file with just one `aiohttp` dependency
+- **🌐 Multi-Provider Support**: OpenRouter, Ollama, OpenAI, Anthropic, Google, Grok, Groq, Qwen, and more
+- **🎯 Intelligent Routing**: Automatic failover between providers
+- **💻 Web UI**: ChatGPT-like interface with dark mode
+- **📊 Built-in Analytics**: Track costs, tokens, and usage
+- **🔒 Privacy First**: All data stored locally in browser
+- **🐳 Docker Ready**: Pre-built images available
+
 ## Use Cases
 
 ### For Developers
```
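The page describes llms.py as an OpenAI-compatible server, with the Quick Start pointing at `http://localhost:8000`. A client can therefore presumably use a standard chat-completions request. A minimal stdlib sketch, assuming the conventional OpenAI `/v1/chat/completions` route (the model id below is a placeholder, not a name confirmed by this page):

```python
import json
from urllib import request

def chat_payload(model: str, prompt: str) -> dict:
    """Build a standard OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def send_chat(base_url: str, payload: dict) -> dict:
    """POST the payload to an OpenAI-compatible endpoint and decode the JSON reply."""
    req = request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.load(resp)

payload = chat_payload("my-model", "Hello!")  # "my-model" is a placeholder id
# send_chat("http://localhost:8000", payload)  # requires a running llms.py server
```

Because the gateway mirrors the OpenAI wire format, any existing OpenAI client library should also work by pointing its base URL at the local server.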
