Commit a0ccee5

apartsinclaude committed

Sync docs/index.md with README.md

Update GitHub Pages landing page to match current README content: badges, examples, test count (640), and section structure.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

1 parent: 566366e

1 file changed: docs/index.md (11 additions, 24 deletions)
````diff
@@ -7,18 +7,16 @@ title: ModelMesh Lite
   <img src="assets/banner.png?v=2" alt="ModelMesh" width="100%">
 </p>
 
-<h1 align="center">ModelMesh Lite</h1>
-
 <p align="center">
   <strong>One integration point for all your AI providers.</strong><br>
   Automatic failover, free-tier aggregation, and capability-based routing.
 </p>
 
 <p align="center">
-  <a href="https://pypi.org/project/modelmesh-lite/"><img src="https://img.shields.io/pypi/v/modelmesh-lite?color=blue" alt="PyPI"></a>
-  <a href="https://pypi.org/project/modelmesh-lite/"><img src="https://img.shields.io/pypi/pyversions/modelmesh-lite" alt="Python"></a>
+  <img src="https://img.shields.io/badge/python-3.11%2B-blue" alt="Python 3.11+">
   <a href="https://github.com/ApartsinProjects/ModelMesh/blob/master/LICENSE"><img src="https://img.shields.io/badge/license-MIT-green" alt="License"></a>
-  <a href="https://github.com/ApartsinProjects/ModelMesh/actions"><img src="https://img.shields.io/badge/tests-356%20passed-brightgreen" alt="Tests"></a>
+  <a href="https://github.com/ApartsinProjects/ModelMesh/actions"><img src="https://img.shields.io/badge/tests-640%20passed-brightgreen" alt="Tests"></a>
+  <a href="https://apartsinprojects.github.io/ModelMesh/"><img src="https://img.shields.io/badge/docs-GitHub%20Pages-blue" alt="Documentation"></a>
 </p>
 
 ---
````
````diff
@@ -44,16 +42,8 @@ export OPENAI_API_KEY="sk-..."
 ```python
 import modelmesh
 
-# Create an OpenAI-compatible client with automatic provider routing
 client = modelmesh.create("chat-completion")
 
-# See what's behind the virtual model
-print(client.describe())
-# Pool "chat-completion" (strategy: stick-until-failure)
-#   capability: generation.text-generation.chat-completion
-#   → openai.gpt-4o [openai.llm.v1] (active)
-#     openai.gpt-4o-mini [openai.llm.v1] (active)
-
 response = client.chat.completions.create(
     model="chat-completion",  # virtual model name = capability pool
     messages=[{"role": "user", "content": "Hello!"}],
````
````diff
@@ -77,7 +67,7 @@ const response = await client.chat.completions.create({
 console.log(response.choices[0].message.content);
 ```
 
-## What Happens Under the Hood
+## How It Works
 
 ```
 client.chat.completions.create(model="chat-completion", ...)
````
````diff
@@ -93,7 +83,7 @@ client.chat.completions.create(model="chat-completion", ...)
 
 **`"chat-completion"`** resolves to a pool containing all models that support chat. The pool's **rotation policy** picks the best active model. If it fails, the router retries with backoff, then rotates to the next model. When a provider's free quota runs out, rotation automatically moves to the next provider.
 
-## Multi-Provider Example
+## Multi-Provider Failover
 
 Add more API keys -- ModelMesh chains them automatically:
 
````
````diff
@@ -104,26 +94,19 @@ export GOOGLE_API_KEY="AI..."
 ```
 
 ```python
-import modelmesh
-
-# All detected providers join the pool -- failover is automatic
 client = modelmesh.create("chat-completion")
 
+# Inspect the providers behind the virtual model
 print(client.describe())
 # Pool "chat-completion" (strategy: stick-until-failure)
 #   capability: generation.text-generation.chat-completion
 #   → openai.gpt-4o [openai.llm.v1] (active)
 #     openai.gpt-4o-mini [openai.llm.v1] (active)
 #     anthropic.claude-sonnet-4 [anthropic.claude.v1] (active)
 #     google.gemini-2.0-flash [google.gemini.v1] (active)
-
-response = client.chat.completions.create(
-    model="chat-completion",
-    messages=[{"role": "user", "content": "Explain quantum computing briefly."}],
-)
 ```
 
-OpenAI, Anthropic, and Gemini models are now in the same pool. If OpenAI is down or its quota is exhausted, the request routes to Anthropic, then Gemini.
+Same `client.chat.completions.create()` call -- but now if OpenAI is down or its quota is exhausted, the request routes to Anthropic, then Gemini.
 
 ## YAML Configuration
 
````
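The lookup shown in the `describe()` output above -- a virtual model name mapping to one capability string, and a pool collecting every registered model that supports it -- can be sketched as follows. The registry contents and helper name are invented for illustration, not ModelMesh's actual data structures:

```python
# Virtual model name -> capability string (hypothetical registry).
CAPABILITIES = {
    "chat-completion": "generation.text-generation.chat-completion",
}

# Provider -> (model, supported capabilities), as if built from detected API keys.
REGISTRY = {
    "openai": [("gpt-4o", {"generation.text-generation.chat-completion"})],
    "anthropic": [("claude-sonnet-4", {"generation.text-generation.chat-completion"})],
    "google": [("gemini-2.0-flash", {"generation.text-generation.chat-completion"})],
}


def resolve_pool(virtual_model):
    """Collect every registered model that supports the requested capability."""
    cap = CAPABILITIES[virtual_model]
    return [
        f"{provider}.{model}"
        for provider, models in REGISTRY.items()
        for model, caps in models
        if cap in caps
    ]
```

Adding a provider's API key would add its models to the registry, and the same virtual model name would resolve to a larger pool with no client-side changes.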
````diff
@@ -221,3 +204,7 @@ cd src/python && python -m pytest ../../tests/ -v
 ## License
 
 [MIT](https://github.com/ApartsinProjects/ModelMesh/blob/master/LICENSE)
+
+---
+
+<sub>Created by [Sasha Apartsin](https://www.apartsin.com)</sub>
````
