Update GitHub Pages landing page to match current README content:
badges, examples, test count (640), and section structure.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
**`"chat-completion"`** resolves to a pool containing all models that support chat. The pool's **rotation policy** picks the best active model. If that model fails, the router retries with backoff, then rotates to the next model in the pool. When a provider's free quota runs out, rotation automatically moves on to the next provider.
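The retry-then-rotate behavior described above can be sketched in plain Python. `Model`, `Pool`, and the retry parameters below are illustrative assumptions for this sketch, not ModelMesh's actual classes or defaults:

```python
import time

class Model:
    """Stand-in for one provider model; `fail_times` simulates transient failures."""
    def __init__(self, name, fail_times=0):
        self.name = name
        self._fail_times = fail_times

    def complete(self, prompt):
        if self._fail_times > 0:
            self._fail_times -= 1
            raise RuntimeError(f"{self.name}: quota exhausted")
        return f"{self.name} answered: {prompt}"

class Pool:
    """All models supporting one capability (e.g. "chat-completion")."""
    def __init__(self, models, max_retries=2, base_delay=0.01):
        self.models = models
        self.max_retries = max_retries
        self.base_delay = base_delay

    def complete(self, prompt):
        # Rotation policy: retry the current model with exponential
        # backoff, then rotate to the next model in the pool.
        for model in self.models:
            for attempt in range(self.max_retries):
                try:
                    return model.complete(prompt)
                except RuntimeError:
                    time.sleep(self.base_delay * 2 ** attempt)
        raise RuntimeError("all models in the pool failed")

pool = Pool([Model("gpt-4o-mini", fail_times=5), Model("claude-haiku")])
print(pool.complete("hello"))  # rotates past the exhausted model
```

The sketch keeps rotation inside the pool, which matches the description: callers ask for a capability, never a specific model.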
## Multi-Provider Failover
Add more API keys -- ModelMesh chains them automatically:
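One way keys might be supplied is as environment variables. The variable names below are an assumption for illustration, not confirmed ModelMesh configuration:

```shell
# Hypothetical variable names -- check the ModelMesh docs for the real ones.
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GEMINI_API_KEY="..."
```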
OpenAI, Anthropic, and Gemini models now share the same pool, and the `client.chat.completions.create()` call stays the same -- but if OpenAI is down or its quota is exhausted, the request routes to Anthropic, then Gemini.
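The provider chain above can be illustrated with a minimal sketch. `Provider` and `FailoverRouter` are hypothetical names invented here; in ModelMesh the real call goes through the OpenAI-compatible `client.chat.completions.create()` instead:

```python
class Provider:
    """Stand-in for one upstream provider with a free-tier quota."""
    def __init__(self, name, quota):
        self.name = name
        self.quota = quota  # remaining free requests

    def chat(self, messages):
        if self.quota <= 0:
            raise RuntimeError(f"{self.name}: quota exhausted")
        self.quota -= 1
        return f"[{self.name}] {messages[-1]['content']}"

class FailoverRouter:
    """Tries providers in order: OpenAI -> Anthropic -> Gemini."""
    def __init__(self, providers):
        self.providers = providers

    def create(self, messages):
        for provider in self.providers:
            try:
                return provider.chat(messages)
            except RuntimeError:
                continue  # down or exhausted: fall through to the next
        raise RuntimeError("every provider in the chain failed")

router = FailoverRouter([
    Provider("openai", quota=0),      # exhausted: skipped
    Provider("anthropic", quota=1),
    Provider("gemini", quota=100),
])
print(router.create([{"role": "user", "content": "hi"}]))  # served by anthropic
```

The key design point the sketch captures: failover is per-request, so a temporarily exhausted provider costs one exception, not an outage.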