---
sidebar_position: 1.6
id: waveai-modes
title: Wave AI (Local Models + BYOK)
---
Wave AI supports custom AI modes that allow you to use local models, custom API endpoints, and alternative AI providers. This gives you complete control over which models and providers you use with Wave's AI features.
AI modes are configured in `~/.config/waveterm/waveai.json`.
To edit using the UI:
- Click the settings (gear) icon in the widget bar
- Select "Settings" from the menu
- Choose "Wave AI Modes" from the settings sidebar
Or edit from the command line:
```
wsh editconfig waveai.json
```

Each mode defines a complete AI configuration, including the model, API endpoint, authentication, and display properties.
Wave AI supports provider-based configuration, which applies sensible defaults for common providers. By specifying the `ai:provider` field, you can significantly simplify your configuration: Wave automatically sets up the endpoint, API type, and secret name.
- `openai` - OpenAI API (automatically configures endpoint and secret name)
- `openrouter` - OpenRouter API (automatically configures endpoint and secret name)
- `google` - Google AI (Gemini)
- `azure` - Azure OpenAI Service (modern API)
- `azure-legacy` - Azure OpenAI Service (legacy deployment API)
- `custom` - Custom API endpoint (fully manual configuration)
Wave AI supports the following API types:
- `openai-chat`: Uses the `/v1/chat/completions` endpoint (most common)
- `openai-responses`: Uses the `/v1/responses` endpoint (modern API for GPT-5+ models)
- `google-gemini`: Google's Gemini API format (automatically set when using `ai:provider: "google"`; not typically used directly)
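The difference between the two OpenAI-style API types comes down to the endpoint path and request shape. The sketch below is illustrative of the wire formats, not Wave's actual implementation:

```python
# Illustrative sketch (not Wave's source): the two OpenAI-style API types
# differ in endpoint path and request body shape.

def chat_request(model: str, prompt: str) -> dict:
    # openai-chat: POST to /v1/chat/completions with a "messages" array
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

def responses_request(model: str, prompt: str) -> dict:
    # openai-responses: POST to /v1/responses with an "input" field
    return {"model": model, "input": prompt}

print(chat_request("llama3.3:70b", "hi")["messages"][0]["role"])  # → user
```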
```json
{
  "mode-key": {
    "display:name": "Qwen (OpenRouter)",
    "ai:provider": "openrouter",
    "ai:model": "qwen/qwen-2.5-coder-32b-instruct"
  }
}
```

A fully specified mode can include the following fields:

```json
{
  "mode-key": {
    "display:name": "Display Name",
    "display:order": 1,
    "display:icon": "icon-name",
    "display:description": "Full description",
    "ai:provider": "custom",
    "ai:apitype": "openai-chat",
    "ai:model": "model-name",
    "ai:thinkinglevel": "medium",
    "ai:endpoint": "http://localhost:11434/v1/chat/completions",
    "ai:azureapiversion": "v1",
    "ai:apitoken": "your-token",
    "ai:apitokensecretname": "PROVIDER_KEY",
    "ai:azureresourcename": "your-resource",
    "ai:azuredeployment": "your-deployment",
    "ai:capabilities": ["tools", "images", "pdfs"]
  }
}
```

| Field | Required | Description |
|---|---|---|
| `display:name` | Yes | Name shown in the AI mode selector |
| `display:order` | No | Sort order in the selector (lower numbers first) |
| `display:icon` | No | Icon identifier for the mode |
| `display:description` | No | Full description of the mode |
| `ai:provider` | No | Provider preset: `openai`, `openrouter`, `google`, `azure`, `azure-legacy`, `custom` |
| `ai:apitype` | No | API type: `openai-chat`, `openai-responses`, or `google-gemini` (defaults to `openai-chat` if not specified) |
| `ai:model` | No | Model identifier (required for most providers) |
| `ai:thinkinglevel` | No | Thinking level: `low`, `medium`, or `high` |
| `ai:endpoint` | No | Full API endpoint URL (auto-set by provider when available) |
| `ai:azureapiversion` | No | Azure API version (for the `azure-legacy` provider; defaults to `2025-04-01-preview`) |
| `ai:apitoken` | No | API key/token (not recommended; use secrets instead) |
| `ai:apitokensecretname` | No | Name of the secret containing the API token (auto-set by provider) |
| `ai:azureresourcename` | No | Azure resource name (for Azure providers) |
| `ai:azuredeployment` | No | Azure deployment name (for the `azure-legacy` provider) |
| `ai:capabilities` | No | Array of supported capabilities: `"tools"`, `"images"`, `"pdfs"` |
| `waveai:cloud` | No | Internal; for Wave Cloud AI configuration only |
| `waveai:premium` | No | Internal; for Wave Cloud AI configuration only |
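As a sanity check before saving `waveai.json`, a small script can validate a mode entry against the field table above. The helper below is hypothetical, not part of Wave or `wsh`:

```python
# Hypothetical validator (not part of Wave) that sanity-checks a mode entry
# against the field table above before you save waveai.json.

VALID_CAPABILITIES = {"tools", "images", "pdfs"}
VALID_PROVIDERS = {"openai", "openrouter", "google", "azure", "azure-legacy", "custom"}

def validate_mode(mode: dict) -> list[str]:
    errors = []
    if "display:name" not in mode:
        errors.append("display:name is required")
    provider = mode.get("ai:provider")
    if provider is not None and provider not in VALID_PROVIDERS:
        errors.append(f"unknown ai:provider: {provider}")
    for cap in set(mode.get("ai:capabilities", [])) - VALID_CAPABILITIES:
        errors.append(f"unknown capability: {cap}")
    return errors

print(validate_mode({"ai:provider": "ollama"}))
# → ['display:name is required', 'unknown ai:provider: ollama']
```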
The `ai:capabilities` field specifies which features the AI mode supports:

- `tools` - Enables AI tool usage for file reading/writing, shell integration, and widget interaction
- `images` - Allows image attachments in chat (the model can view uploaded images)
- `pdfs` - Allows PDF file attachments in chat (the model can read PDF content)
Provider-specific behavior:
- OpenAI and Google providers: Capabilities are automatically configured based on the model. You don't need to specify them.
- OpenRouter, Azure, Azure-Legacy, and Custom providers: You must manually specify capabilities based on your model's features.
:::warning
If you don't include `"tools"` in the `ai:capabilities` array, the AI model will not be able to interact with your Wave terminal widgets, read/write files, or execute commands. Most AI modes should include `"tools"` for the best Wave experience.
:::
Most models support tools and benefit from them. Vision-capable models should include `images`. Not all models support PDFs, so include `pdfs` only if your model can process them.
Ollama provides an OpenAI-compatible API for running models locally:
```json
{
  "ollama-llama": {
    "display:name": "Ollama - Llama 3.3",
    "display:order": 1,
    "display:icon": "llama",
    "display:description": "Local Llama 3.3 70B model via Ollama",
    "ai:apitype": "openai-chat",
    "ai:model": "llama3.3:70b",
    "ai:thinkinglevel": "medium",
    "ai:endpoint": "http://localhost:11434/v1/chat/completions",
    "ai:apitoken": "ollama"
  }
}
```

:::tip
The `ai:apitoken` field is required, but Ollama ignores it; you can set it to any value, such as `"ollama"`.
:::
LM Studio provides a local server that can run various models:
```json
{
  "lmstudio-qwen": {
    "display:name": "LM Studio - Qwen",
    "display:order": 2,
    "display:icon": "server",
    "display:description": "Local Qwen model via LM Studio",
    "ai:apitype": "openai-chat",
    "ai:model": "qwen/qwen-2.5-coder-32b-instruct",
    "ai:thinkinglevel": "medium",
    "ai:endpoint": "http://localhost:1234/v1/chat/completions",
    "ai:apitoken": "not-needed"
  }
}
```

vLLM is a high-performance inference server with OpenAI API compatibility:
```json
{
  "vllm-local": {
    "display:name": "vLLM",
    "display:order": 3,
    "display:icon": "server",
    "display:description": "Local model via vLLM",
    "ai:apitype": "openai-chat",
    "ai:model": "your-model-name",
    "ai:thinkinglevel": "medium",
    "ai:endpoint": "http://localhost:8000/v1/chat/completions",
    "ai:apitoken": "not-needed"
  }
}
```

Using the `openai` provider automatically configures the endpoint and secret name:
```json
{
  "openai-gpt4o": {
    "display:name": "GPT-4o",
    "ai:provider": "openai",
    "ai:model": "gpt-4o"
  }
}
```

The provider automatically sets:

- `ai:endpoint` to `https://api.openai.com/v1/chat/completions`
- `ai:apitype` to `openai-chat` (or `openai-responses` for GPT-5+ models)
- `ai:apitokensecretname` to `OPENAI_KEY` (store your OpenAI API key with this name)
- `ai:capabilities` to `["tools", "images", "pdfs"]` (automatically determined based on the model)
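These provider defaults behave like a simple merge, where explicit keys in your mode take precedence over the preset. The preset values below mirror the `openai` provider as described above, but the merge logic itself is an illustrative sketch, not Wave's source:

```python
# Illustrative sketch of provider-preset defaulting; the preset values mirror
# the openai provider described above, but this merge logic is hypothetical.

OPENAI_PRESET = {
    "ai:endpoint": "https://api.openai.com/v1/chat/completions",
    "ai:apitype": "openai-chat",
    "ai:apitokensecretname": "OPENAI_KEY",
}

def apply_preset(mode: dict) -> dict:
    # Keys set explicitly in the mode win over the preset defaults.
    return {**OPENAI_PRESET, **mode}

resolved = apply_preset({"display:name": "GPT-4o", "ai:provider": "openai", "ai:model": "gpt-4o"})
print(resolved["ai:apitokensecretname"])  # → OPENAI_KEY
```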
For newer models like GPT-4.1 or GPT-5, the API type is automatically determined:
```json
{
  "openai-gpt41": {
    "display:name": "GPT-4.1",
    "ai:provider": "openai",
    "ai:model": "gpt-4.1"
  }
}
```

OpenRouter provides access to multiple AI models. Using the `openrouter` provider simplifies configuration:
```json
{
  "openrouter-qwen": {
    "display:name": "OpenRouter - Qwen",
    "ai:provider": "openrouter",
    "ai:model": "qwen/qwen-2.5-coder-32b-instruct"
  }
}
```

The provider automatically sets:

- `ai:endpoint` to `https://openrouter.ai/api/v1/chat/completions`
- `ai:apitype` to `openai-chat`
- `ai:apitokensecretname` to `OPENROUTER_KEY` (store your OpenRouter API key with this name)
:::note
For OpenRouter, you must manually specify `ai:capabilities` based on your model's features. Example:

```json
{
  "openrouter-qwen": {
    "display:name": "OpenRouter - Qwen",
    "ai:provider": "openrouter",
    "ai:model": "qwen/qwen-2.5-coder-32b-instruct",
    "ai:capabilities": ["tools"]
  }
}
```

:::
Google AI provides the Gemini family of models. Using the `google` provider simplifies configuration:

```json
{
  "google-gemini": {
    "display:name": "Gemini 3 Pro",
    "ai:provider": "google",
    "ai:model": "gemini-3-pro-preview"
  }
}
```

The provider automatically sets:

- `ai:endpoint` to `https://generativelanguage.googleapis.com/v1beta/models/{model}:streamGenerateContent`
- `ai:apitype` to `google-gemini`
- `ai:apitokensecretname` to `GOOGLE_AI_KEY` (store your Google AI API key with this name)
- `ai:capabilities` to `["tools", "images", "pdfs"]` (automatically configured)
For the modern Azure OpenAI API, use the `azure` provider:

```json
{
  "azure-gpt4": {
    "display:name": "Azure GPT-4",
    "ai:provider": "azure",
    "ai:model": "gpt-4",
    "ai:azureresourcename": "your-resource-name"
  }
}
```

The provider automatically sets:

- `ai:endpoint` to `https://your-resource-name.openai.azure.com/openai/v1/chat/completions` (or `/responses` for newer models)
- `ai:apitype` based on the model
- `ai:apitokensecretname` to `AZURE_OPENAI_KEY` (store your Azure OpenAI key with this name)
:::note
For Azure providers, you must manually specify `ai:capabilities` based on your model's features. Example:

```json
{
  "azure-gpt4": {
    "display:name": "Azure GPT-4",
    "ai:provider": "azure",
    "ai:model": "gpt-4",
    "ai:azureresourcename": "your-resource-name",
    "ai:capabilities": ["tools", "images"]
  }
}
```

:::
For legacy Azure deployments, use the `azure-legacy` provider:

```json
{
  "azure-legacy-gpt4": {
    "display:name": "Azure GPT-4 (Legacy)",
    "ai:provider": "azure-legacy",
    "ai:azureresourcename": "your-resource-name",
    "ai:azuredeployment": "your-deployment-name"
  }
}
```

The provider automatically constructs the full endpoint URL and sets the API version (defaulting to `2025-04-01-preview`). You can override the API version with `ai:azureapiversion` if needed.
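The constructed URL follows Azure's deployment-based API shape. The helper below is a hypothetical sketch of that assembly (not Wave's source), using the placeholder resource and deployment names from the example:

```python
# Hypothetical sketch of how the legacy Azure endpoint could be assembled from
# the config fields; the URL shape follows Azure's deployment-based API.

def azure_legacy_endpoint(resource: str, deployment: str,
                          api_version: str = "2025-04-01-preview") -> str:
    return (f"https://{resource}.openai.azure.com/openai/deployments/"
            f"{deployment}/chat/completions?api-version={api_version}")

print(azure_legacy_endpoint("your-resource-name", "your-deployment-name"))
```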
:::note
For the `azure-legacy` provider, you must manually specify `ai:capabilities` based on your model's features.
:::
Instead of storing API keys directly in the configuration, you should use Wave's secret store to keep your credentials secure. Secrets are stored encrypted using your system's native keychain.
Using the Secrets UI (recommended):
- Click the settings (gear) icon in the widget bar
- Select "Secrets" from the menu
- Click "Add New Secret"
- Enter the secret name (e.g., `OPENAI_KEY`) and your API key
- Click "Save"

Or from the command line:

```
wsh secret set OPENAI_KEY=sk-xxxxxxxxxxxxxxxx
wsh secret set OPENROUTER_KEY=sk-xxxxxxxxxxxxxxxx
```

When using providers like `openai` or `openrouter`, the secret name is automatically set. Just ensure the secret exists with the correct name:
```json
{
  "my-openai-mode": {
    "display:name": "OpenAI GPT-4o",
    "ai:provider": "openai",
    "ai:model": "gpt-4o"
  }
}
```

The `openai` provider automatically looks for the `OPENAI_KEY` secret. See the Secrets documentation for more information on managing secrets securely in Wave.
You can define multiple AI modes and switch between them easily:
```json
{
  "ollama-llama": {
    "display:name": "Ollama - Llama 3.3",
    "display:order": 1,
    "ai:model": "llama3.3:70b",
    "ai:endpoint": "http://localhost:11434/v1/chat/completions",
    "ai:apitoken": "ollama"
  },
  "ollama-codellama": {
    "display:name": "Ollama - CodeLlama",
    "display:order": 2,
    "ai:model": "codellama:34b",
    "ai:endpoint": "http://localhost:11434/v1/chat/completions",
    "ai:apitoken": "ollama"
  },
  "openai-gpt4o": {
    "display:name": "GPT-4o",
    "display:order": 10,
    "ai:provider": "openai",
    "ai:model": "gpt-4o"
  }
}
```

If Wave can't connect to your model server:
- For cloud providers with `ai:provider` set: ensure you have the correct secret stored (e.g., `OPENAI_KEY`, `OPENROUTER_KEY`)
- For local/custom endpoints: verify the server is running (`curl http://localhost:11434/v1/models` for Ollama)
- Check that `ai:endpoint` is the complete endpoint URL, including the path (e.g., `http://localhost:11434/v1/chat/completions`)
- Verify that `ai:apitype` matches your server's API (defaults are usually correct when using providers)
- Check firewall settings if using a non-localhost address
If you get "model not found" errors:
- Verify the model name matches exactly what your server expects
- For Ollama, use `ollama list` to see available models
- Some servers require prefixes or specific naming formats

Notes on choosing an API type:

- The API type defaults to `openai-chat` if not specified, which works for most providers
- Use `openai-chat` for Ollama, LM Studio, custom endpoints, and most cloud providers
- Use `openai-responses` for newer OpenAI models (GPT-5+) or when your provider specifically requires it
- Provider presets automatically set the correct API type when needed