Commit 2d2f11e
Author: q1666848408-cyber

Add OfoxAI as LLM provider

OfoxAI (https://ofox.ai) is a unified API gateway that provides access to 100+ LLMs through a single endpoint. It supports the OpenAI, Anthropic, and Gemini protocols natively, so Continue users can use it via the existing openai/anthropic/gemini providers by overriding only apiBase.

- Add docs/customize/model-providers/more/ofoxai.mdx
- Register the page in docs/docs.json (More Providers group)
- Add a row to the Hosted Services table in overview.mdx

1 parent cb27309 · 3 files changed · 209 additions, 0 deletions

docs/customize/model-providers/more/ofoxai.mdx (207 additions, 0 deletions)
---
title: "How to Configure OfoxAI with Continue"
sidebarTitle: "OfoxAI"
---

<Info>
[OfoxAI](https://ofox.ai/zh) is a unified LLM API gateway that provides access to 100+ models (OpenAI, Anthropic, Google, Meta, DeepSeek, Mistral, Qwen, and more) through a single API key. It is fully compatible with the OpenAI, Anthropic, and Gemini protocols, so you can switch providers by updating only the base URL and key; no code changes are required.
</Info>

<Tip>
Get an API key from the [OfoxAI Console](https://app.ofox.ai/auth). Full documentation lives at [docs.ofox.ai](https://docs.ofox.ai).
</Tip>
## Base URLs

OfoxAI exposes three protocol-compatible endpoints. Pick the one that matches the model family you want to call:

| Protocol  | Base URL                        | Use For                                     |
| :-------- | :------------------------------ | :------------------------------------------ |
| OpenAI    | `https://api.ofox.ai/v1`        | GPT, DeepSeek, Qwen, Llama, and most others |
| Anthropic | `https://api.ofox.ai/anthropic` | Claude family                               |
| Gemini    | `https://api.ofox.ai/gemini`    | Gemini family                               |

All three endpoints accept the same OfoxAI API key. Requests from users in China are routed through Hong Kong by default for lower latency.
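The routing in the table above can be sketched as a small helper. This is a hypothetical illustration only; Continue does not need it, since you set `apiBase` directly in your config:

```python
# Hypothetical helper mirroring the endpoint table above: given a model
# name, return the OfoxAI base URL whose protocol that model speaks.
OFOXAI_BASE_URLS = {
    "anthropic": "https://api.ofox.ai/anthropic",  # Claude family
    "gemini": "https://api.ofox.ai/gemini",        # Gemini family
    "openai": "https://api.ofox.ai/v1",            # everything else
}

def base_url_for(model: str) -> str:
    """Pick the protocol endpoint for a model name."""
    if model.startswith("claude"):
        return OFOXAI_BASE_URLS["anthropic"]
    if model.startswith("gemini"):
        return OFOXAI_BASE_URLS["gemini"]
    # GPT, DeepSeek, Qwen, Llama, etc. use the OpenAI-compatible endpoint
    return OFOXAI_BASE_URLS["openai"]
```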
## Configuration

Because OfoxAI speaks the OpenAI / Anthropic / Gemini wire protocols natively, you can configure it in Continue using the matching built-in provider and simply override `apiBase`.

### OpenAI-compatible models (recommended default)

<Tabs>
<Tab title="YAML">
```yaml title="config.yaml"
name: My Config
version: 0.0.1
schema: v1

models:
  - name: GPT-4o (OfoxAI)
    provider: openai
    model: gpt-4o
    apiBase: https://api.ofox.ai/v1
    apiKey: ${{ secrets.OFOXAI_API_KEY }}
    roles:
      - chat
      - edit
      - apply
```
</Tab>
<Tab title="JSON (Deprecated)">
```json title="config.json"
{
  "models": [
    {
      "title": "GPT-4o (OfoxAI)",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "https://api.ofox.ai/v1",
      "apiKey": "<YOUR_OFOXAI_API_KEY>"
    }
  ]
}
```
</Tab>
</Tabs>
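Under the hood, this config just points Continue's standard OpenAI-protocol chat-completions request at OfoxAI. A minimal sketch of that request shape, assuming the endpoint follows the OpenAI path convention (`/v1/chat/completions`); the hypothetical helper below builds the payload without sending it:

```python
import json

def build_chat_request(api_key: str, model: str, prompt: str) -> tuple[str, dict, bytes]:
    """Build an OpenAI-protocol chat-completions request against OfoxAI.

    Returns (url, headers, body) without performing any network call.
    """
    url = "https://api.ofox.ai/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # the same OfoxAI key works on every endpoint
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, headers, body
```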
### Claude via the Anthropic protocol

<Tabs>
<Tab title="YAML">
```yaml title="config.yaml"
models:
  - name: Claude Sonnet (OfoxAI)
    provider: anthropic
    model: claude-sonnet-4-20250514
    apiBase: https://api.ofox.ai/anthropic
    apiKey: ${{ secrets.OFOXAI_API_KEY }}
```
</Tab>
<Tab title="JSON (Deprecated)">
```json title="config.json"
{
  "models": [
    {
      "title": "Claude Sonnet (OfoxAI)",
      "provider": "anthropic",
      "model": "claude-sonnet-4-20250514",
      "apiBase": "https://api.ofox.ai/anthropic",
      "apiKey": "<YOUR_OFOXAI_API_KEY>"
    }
  ]
}
```
</Tab>
</Tabs>
### Gemini via the Gemini protocol

<Tabs>
<Tab title="YAML">
```yaml title="config.yaml"
models:
  - name: Gemini 2.5 Pro (OfoxAI)
    provider: gemini
    model: gemini-2.5-pro
    apiBase: https://api.ofox.ai/gemini
    apiKey: ${{ secrets.OFOXAI_API_KEY }}
```
</Tab>
<Tab title="JSON (Deprecated)">
```json title="config.json"
{
  "models": [
    {
      "title": "Gemini 2.5 Pro (OfoxAI)",
      "provider": "gemini",
      "model": "gemini-2.5-pro",
      "apiBase": "https://api.ofox.ai/gemini",
      "apiKey": "<YOUR_OFOXAI_API_KEY>"
    }
  ]
}
```
</Tab>
</Tabs>
## Mixing Models for Different Roles

Because every OfoxAI request uses the same key, you can wire different upstream models to different Continue roles:

```yaml title="config.yaml"
models:
  # Chat & Edit: high-quality model
  - name: Claude Sonnet (OfoxAI)
    provider: anthropic
    model: claude-sonnet-4-20250514
    apiBase: https://api.ofox.ai/anthropic
    apiKey: ${{ secrets.OFOXAI_API_KEY }}
    roles:
      - chat
      - edit
      - apply

  # Autocomplete: fast and cheap
  - name: DeepSeek Coder (OfoxAI)
    provider: openai
    model: deepseek-coder
    apiBase: https://api.ofox.ai/v1
    apiKey: ${{ secrets.OFOXAI_API_KEY }}
    roles:
      - autocomplete

  # Embeddings
  - name: text-embedding-3-large (OfoxAI)
    provider: openai
    model: text-embedding-3-large
    apiBase: https://api.ofox.ai/v1
    apiKey: ${{ secrets.OFOXAI_API_KEY }}
    roles:
      - embed
```
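The role wiring above amounts to a dispatch table from Continue role to upstream model. A hypothetical sketch of that mapping, for illustration only (Continue resolves roles internally from the config):

```python
# Hypothetical role -> model dispatch mirroring the YAML above.
ROLE_MODELS = {
    "chat": "claude-sonnet-4-20250514",
    "edit": "claude-sonnet-4-20250514",
    "apply": "claude-sonnet-4-20250514",
    "autocomplete": "deepseek-coder",
    "embed": "text-embedding-3-large",
}

def model_for_role(role: str) -> str:
    """Return the upstream model configured for a Continue role."""
    try:
        return ROLE_MODELS[role]
    except KeyError:
        raise ValueError(f"no model configured for role {role!r}") from None
```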
## Tool Use / Agent Mode

OfoxAI passes through native function calling for models that support it (GPT-4o, Claude, Gemini, Qwen, etc.). If Continue does not auto-detect a model's tool support, set it explicitly:

<Tabs>
<Tab title="YAML">
```yaml title="config.yaml"
models:
  - name: GPT-4o (OfoxAI)
    provider: openai
    model: gpt-4o
    apiBase: https://api.ofox.ai/v1
    apiKey: ${{ secrets.OFOXAI_API_KEY }}
    capabilities:
      - tool_use
```
</Tab>
<Tab title="JSON (Deprecated)">
```json title="config.json"
{
  "models": [
    {
      "title": "GPT-4o (OfoxAI)",
      "provider": "openai",
      "model": "gpt-4o",
      "apiBase": "https://api.ofox.ai/v1",
      "apiKey": "<YOUR_OFOXAI_API_KEY>",
      "capabilities": {
        "tools": true
      }
    }
  ]
}
```
</Tab>
</Tabs>
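For models reached through the OpenAI-compatible endpoint, tool calls follow the standard OpenAI function-calling schema. A minimal sketch of one `tools` entry as it would appear on the wire; the `read_file` tool here is a hypothetical example, not an actual Continue tool name:

```python
# Build one entry of the OpenAI-protocol "tools" array.
def make_tool(name: str, description: str, parameters: dict) -> dict:
    """Wrap a function description in the OpenAI tools schema."""
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": parameters,  # JSON Schema describing the arguments
        },
    }

# Hypothetical tool, for illustration only.
read_file_tool = make_tool(
    "read_file",
    "Read a file from the workspace",
    {
        "type": "object",
        "properties": {"path": {"type": "string"}},
        "required": ["path"],
    },
)
```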
## Pricing

OfoxAI offers a free tier for evaluation and pay-per-token usage thereafter. Tokens are charged at the upstream provider's published rate plus a small gateway fee. Detailed pricing and the live model catalog are available at [docs.ofox.ai](https://docs.ofox.ai).

<Note>
Because Continue talks to OfoxAI through the standard OpenAI / Anthropic / Gemini providers, all of Continue's existing features (streaming, tool calls, and prompt caching where supported) work out of the box.
</Note>

docs/customize/model-providers/overview.mdx (1 addition, 0 deletions)

@@ -34,6 +34,7 @@ Beyond the top-level providers, Continue supports many other options:
 | [Together AI](/customize/model-providers/more/together) | Platform for running a variety of open models |
 | [DeepInfra](/customize/model-providers/more/deepinfra) | Hosting for various open source models |
 | [OpenRouter](/customize/model-providers/top-level/openrouter) | Gateway to multiple model providers |
+| [OfoxAI](/customize/model-providers/more/ofoxai) | Unified LLM API gateway with OpenAI / Anthropic / Gemini protocol support |
 | [ClawRouter](/customize/model-providers/more/clawrouter) | Open-source LLM router with automatic cost-optimized model selection |
 | [Tetrate Agent Router Service](/customize/model-providers/top-level/tetrate_agent_router_service) | Gateway with intelligent routing across multiple model providers |
 | [Cohere](/customize/model-providers/more/cohere) | Models specialized for semantic search and text generation |

docs/docs.json (1 addition, 0 deletions)

@@ -180,6 +180,7 @@
 "customize/model-providers/more/moonshot",
 "customize/model-providers/more/nous",
 "customize/model-providers/more/nvidia",
+"customize/model-providers/more/ofoxai",
 "customize/model-providers/more/tensorix",
 "customize/model-providers/more/together",
 "customize/model-providers/more/xAI",
