Commit 5a23ba8

committed: update docs
1 parent 8f965e7 commit 5a23ba8

1 file changed: docs/docs/waveai-modes.mdx
Lines changed: 77 additions & 14 deletions
@@ -1,7 +1,7 @@
 ---
 sidebar_position: 1.6
 id: "waveai-modes"
-title: "Wave AI (Local Models)"
+title: "Wave AI (Local Models + BYOK)"
 ---

 Wave AI supports custom AI modes that allow you to use local models, custom API endpoints, and alternative AI providers. This gives you complete control over which models and providers you use with Wave's AI features.
@@ -37,10 +37,11 @@ Wave AI now supports provider-based configuration which automatically applies se

 ### Supported API Types

-Wave AI supports two OpenAI-compatible API types:
+Wave AI supports the following API types:

 - **`openai-chat`**: Uses the `/v1/chat/completions` endpoint (most common)
 - **`openai-responses`**: Uses the `/v1/responses` endpoint (modern API for GPT-5+ models)
+- **`google-gemini`**: Google's Gemini API format (automatically set when using `ai:provider: "google"`, not typically used directly)

 ## Configuration Structure

@@ -49,7 +50,7 @@ Wave AI supports two OpenAI-compatible API types:
 ```json
 {
   "mode-key": {
-    "display:name": "Display Name",
+    "display:name": "Qwen (OpenRouter)",
     "ai:provider": "openrouter",
     "ai:model": "qwen/qwen-2.5-coder-32b-instruct"
   }
@@ -89,10 +90,10 @@ Wave AI supports two OpenAI-compatible API types:
 | `display:icon` | No | Icon identifier for the mode |
 | `display:description` | No | Full description of the mode |
 | `ai:provider` | No | Provider preset: `openai`, `openrouter`, `google`, `azure`, `azure-legacy`, `custom` |
-| `ai:apitype` | No | API type: `openai-chat` or `openai-responses` (defaults to `openai-chat` if not specified) |
+| `ai:apitype` | No | API type: `openai-chat`, `openai-responses`, or `google-gemini` (defaults to `openai-chat` if not specified) |
 | `ai:model` | No | Model identifier (required for most providers) |
 | `ai:thinkinglevel` | No | Thinking level: `low`, `medium`, or `high` |
-| `ai:endpoint` | No | Full API endpoint URL (auto-set by provider when available) |
+| `ai:endpoint` | No | *Full* API endpoint URL (auto-set by provider when available) |
 | `ai:azureapiversion` | No | Azure API version (for `azure-legacy` provider, defaults to `2025-04-01-preview`) |
 | `ai:apitoken` | No | API key/token (not recommended - use secrets instead) |
 | `ai:apitokensecretname` | No | Name of secret containing API token (auto-set by provider) |
@@ -110,6 +111,14 @@ The `ai:capabilities` field specifies what features the AI mode supports:
 - **`images`** - Allows image attachments in chat (model can view uploaded images)
 - **`pdfs`** - Allows PDF file attachments in chat (model can read PDF content)

+**Provider-specific behavior:**
+- **OpenAI and Google providers**: Capabilities are automatically configured based on the model. You don't need to specify them.
+- **OpenRouter, Azure, Azure-Legacy, and Custom providers**: You must manually specify capabilities based on your model's features.
+
+:::warning
+If you don't include `"tools"` in the `ai:capabilities` array, the AI model will not be able to interact with your Wave terminal widgets, read/write files, or execute commands. Most AI modes should include `"tools"` for the best Wave experience.
+:::
+
 Most models support `tools` and can benefit from it. Vision-capable models should include `images`. Not all models support PDFs, so only include `pdfs` if your model can process them.
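Putting the capability guidance above together, a vision-capable local mode might declare its capabilities like this sketch (mode key, model name, and endpoint are illustrative placeholders):

```json
{
  "local-vision": {
    "display:name": "Local Vision Model",
    "ai:apitype": "openai-chat",
    "ai:model": "your-model-name",
    "ai:endpoint": "http://localhost:11434/v1/chat/completions",
    "ai:capabilities": ["tools", "images"]
  }
}
```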

 ## Local Model Examples
@@ -127,7 +136,7 @@ Most models support `tools` and can benefit from it. Vision-capable models shoul
     "display:description": "Local Llama 3.3 70B model via Ollama",
     "ai:apitype": "openai-chat",
     "ai:model": "llama3.3:70b",
-    "ai:thinkinglevel": "normal",
+    "ai:thinkinglevel": "medium",
     "ai:endpoint": "http://localhost:11434/v1/chat/completions",
     "ai:apitoken": "ollama"
   }
@@ -151,28 +160,28 @@ The `ai:apitoken` field is required but Ollama ignores it - you can set it to an
     "display:description": "Local Qwen model via LM Studio",
     "ai:apitype": "openai-chat",
     "ai:model": "qwen/qwen-2.5-coder-32b-instruct",
-    "ai:thinkinglevel": "normal",
+    "ai:thinkinglevel": "medium",
     "ai:endpoint": "http://localhost:1234/v1/chat/completions",
     "ai:apitoken": "not-needed"
   }
 }
 ```

-### Jan
+### vLLM

-[Jan](https://jan.ai) is another local AI runtime with OpenAI API compatibility:
+[vLLM](https://docs.vllm.ai) is a high-performance inference server with OpenAI API compatibility:

 ```json
 {
-  "jan-local": {
-    "display:name": "Jan",
+  "vllm-local": {
+    "display:name": "vLLM",
     "display:order": 3,
     "display:icon": "server",
-    "display:description": "Local model via Jan",
+    "display:description": "Local model via vLLM",
     "ai:apitype": "openai-chat",
     "ai:model": "your-model-name",
-    "ai:thinkinglevel": "normal",
-    "ai:endpoint": "http://localhost:1337/v1/chat/completions",
+    "ai:thinkinglevel": "medium",
+    "ai:endpoint": "http://localhost:8000/v1/chat/completions",
     "ai:apitoken": "not-needed"
   }
 }
@@ -198,6 +207,7 @@ The provider automatically sets:
 - `ai:endpoint` to `https://api.openai.com/v1/chat/completions`
 - `ai:apitype` to `openai-chat` (or `openai-responses` for GPT-5+ models)
 - `ai:apitokensecretname` to `OPENAI_KEY` (store your OpenAI API key with this name)
+- `ai:capabilities` to `["tools", "images", "pdfs"]` (automatically determined based on model)

 For newer models like GPT-4.1 or GPT-5, the API type is automatically determined:

@@ -230,6 +240,40 @@ The provider automatically sets:
 - `ai:apitype` to `openai-chat`
 - `ai:apitokensecretname` to `OPENROUTER_KEY` (store your OpenRouter API key with this name)

+:::note
+For OpenRouter, you must manually specify `ai:capabilities` based on your model's features. Example:
+```json
+{
+  "openrouter-qwen": {
+    "display:name": "OpenRouter - Qwen",
+    "ai:provider": "openrouter",
+    "ai:model": "qwen/qwen-2.5-coder-32b-instruct",
+    "ai:capabilities": ["tools"]
+  }
+}
+```
+:::
+
+### Google AI (Gemini)
+
+[Google AI](https://ai.google.dev) provides the Gemini family of models. Using the `google` provider simplifies configuration:
+
+```json
+{
+  "google-gemini": {
+    "display:name": "Gemini 3 Pro",
+    "ai:provider": "google",
+    "ai:model": "gemini-3-pro-preview"
+  }
+}
+```
+
+The provider automatically sets:
+- `ai:endpoint` to `https://generativelanguage.googleapis.com/v1beta/models/{model}:streamGenerateContent`
+- `ai:apitype` to `google-gemini`
+- `ai:apitokensecretname` to `GOOGLE_AI_KEY` (store your Google AI API key with this name)
+- `ai:capabilities` to `["tools", "images", "pdfs"]` (automatically configured)
+
 ### Azure OpenAI (Modern API)

 For the modern Azure OpenAI API, use the `azure` provider:
@@ -250,6 +294,21 @@ The provider automatically sets:
 - `ai:apitype` based on the model
 - `ai:apitokensecretname` to `AZURE_OPENAI_KEY` (store your Azure OpenAI key with this name)

+:::note
+For Azure providers, you must manually specify `ai:capabilities` based on your model's features. Example:
+```json
+{
+  "azure-gpt4": {
+    "display:name": "Azure GPT-4",
+    "ai:provider": "azure",
+    "ai:model": "gpt-4",
+    "ai:azureresourcename": "your-resource-name",
+    "ai:capabilities": ["tools", "images"]
+  }
+}
+```
+:::
+
 ### Azure OpenAI (Legacy Deployment API)

 For legacy Azure deployments, use the `azure-legacy` provider:
@@ -267,6 +326,10 @@ For legacy Azure deployments, use the `azure-legacy` provider:

 The provider automatically constructs the full endpoint URL and sets the API version (defaults to `2025-04-01-preview`). You can override the API version with `ai:azureapiversion` if needed.

+:::note
+For the Azure Legacy provider, you must manually specify `ai:capabilities` based on your model's features.
+:::
+
 ## Using Secrets for API Keys

 Instead of storing API keys directly in the configuration, you should use Wave's secret store to keep your credentials secure. Secrets are stored encrypted using your system's native keychain.
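To illustrate the secrets paragraph above, a custom mode can reference a stored secret via `ai:apitokensecretname` instead of embedding the token in `ai:apitoken` (the mode key, model, endpoint, and secret name below are illustrative placeholders):

```json
{
  "my-custom-mode": {
    "display:name": "Custom Provider",
    "ai:apitype": "openai-chat",
    "ai:model": "your-model-name",
    "ai:endpoint": "https://example.com/v1/chat/completions",
    "ai:apitokensecretname": "MY_PROVIDER_KEY"
  }
}
```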
