altimate supports 35+ LLM providers. Configure them in the `provider` section of your config file.
Each provider has a key in the `provider` object:
```json
{
  "provider": {
    "<provider-name>": {
      "apiKey": "{env:API_KEY}",
      "baseURL": "https://custom.endpoint.com/v1",
      "headers": {
        "X-Custom-Header": "value"
      }
    }
  }
}
```

!!! tip
    Use `{env:...}` substitution for API keys so you never commit secrets to version control.
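The `{file:...}` form listed in the provider fields reference works the same way. As a hedged sketch (the file path here is a hypothetical example), an API key can be read from a file kept outside the repository:

```json
{
  "provider": {
    "anthropic": {
      "apiKey": "{file:/home/me/.secrets/anthropic-key}"
    }
  }
}
```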
Managed LLM access with dynamic routing across Sonnet 4.6, Opus 4.6, GPT-5.4, GPT-5.3, and more. No API keys to manage — 10M tokens free to get started.
```json
{
  "provider": {
    "altimate": {}
  },
  "model": "altimate/auto"
}
```

For pricing, security, and data handling details, see the Altimate LLM Gateway guide.
```json
{
  "provider": {
    "anthropic": {
      "apiKey": "{env:ANTHROPIC_API_KEY}"
    }
  },
  "model": "anthropic/claude-sonnet-4-6"
}
```

Available models: `claude-opus-4-6`, `claude-sonnet-4-6`, `claude-haiku-4-5-20251001`
```json
{
  "provider": {
    "openai": {
      "apiKey": "{env:OPENAI_API_KEY}"
    }
  },
  "model": "openai/gpt-4o"
}
```

```json
{
  "provider": {
    "bedrock": {
      "region": "us-east-1",
      "accessKeyId": "{env:AWS_ACCESS_KEY_ID}",
      "secretAccessKey": "{env:AWS_SECRET_ACCESS_KEY}"
    }
  },
  "model": "bedrock/anthropic.claude-sonnet-4-6-v1"
}
```

Uses the standard AWS credential chain. Set `AWS_PROFILE` or provide credentials directly.
!!! note
    If you have AWS SSO or IAM roles configured, Bedrock will use your default credential chain automatically, so no explicit keys are needed.
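As a minimal sketch assuming the default credential chain (SSO, an IAM role, or `AWS_PROFILE`) is already configured, the Bedrock block can omit explicit keys entirely:

```json
{
  "provider": {
    "bedrock": {
      "region": "us-east-1"
    }
  },
  "model": "bedrock/anthropic.claude-sonnet-4-6-v1"
}
```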
```json
{
  "provider": {
    "azure": {
      "apiKey": "{env:AZURE_OPENAI_API_KEY}",
      "baseURL": "https://your-resource.openai.azure.com/openai/deployments/your-deployment"
    }
  },
  "model": "azure/gpt-4o"
}
```

```json
{
  "provider": {
    "google": {
      "apiKey": "{env:GOOGLE_API_KEY}"
    }
  },
  "model": "google/gemini-2.5-pro"
}
```

```json
{
  "provider": {
    "google-vertex": {
      "project": "my-gcp-project",
      "location": "us-central1"
    }
  },
  "model": "google-vertex/gemini-2.5-pro"
}
```

Uses Google Cloud Application Default Credentials. Authenticate with:

```shell
gcloud auth application-default login
```

The `project` and `location` fields can also be set via environment variables:
| Field | Environment Variables (checked in order) |
|---|---|
| `project` | `GOOGLE_CLOUD_PROJECT`, `GCP_PROJECT`, `GCLOUD_PROJECT` |
| `location` | `GOOGLE_VERTEX_LOCATION`, `GOOGLE_CLOUD_LOCATION`, `VERTEX_LOCATION` |
If `location` is not set, it defaults to `us-central1`.
!!! tip
    You can also access Anthropic models through Vertex AI using the google-vertex provider (e.g., `google-vertex/claude-sonnet-4-6`).
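Putting that tip into config form, a sketch of routing an Anthropic model through Vertex AI (the project and location values are placeholders):

```json
{
  "provider": {
    "google-vertex": {
      "project": "my-gcp-project",
      "location": "us-central1"
    }
  },
  "model": "google-vertex/claude-sonnet-4-6"
}
```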
```json
{
  "provider": {
    "ollama": {
      "baseURL": "http://localhost:11434"
    }
  },
  "model": "ollama/llama3.1"
}
```

No API key needed. Runs entirely on your local machine.
!!! info
    Make sure Ollama is running before starting altimate. Install it from ollama.com and pull your desired model with `ollama pull llama3.1`.
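If your Ollama server listens on a non-default port (the port below is a hypothetical example), point `baseURL` at it:

```json
{
  "provider": {
    "ollama": {
      "baseURL": "http://localhost:11500"
    }
  },
  "model": "ollama/llama3.1"
}
```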
Run local models through LM Studio's OpenAI-compatible server:
```json
{
  "provider": {
    "lmstudio": {
      "name": "LM Studio",
      "npm": "@ai-sdk/openai-compatible",
      "env": ["LMSTUDIO_API_KEY"],
      "options": {
        "apiKey": "lm-studio",
        "baseURL": "http://localhost:1234/v1"
      },
      "models": {
        "qwen2.5-7b-instruct": {
          "name": "Qwen 2.5 7B Instruct",
          "tool_call": true,
          "limit": { "context": 131072, "output": 8192 }
        }
      }
    }
  },
  "model": "lmstudio/qwen2.5-7b-instruct"
}
```

Setup:
- Open LM Studio → Developer tab → Start Server (default port: 1234)
- Load a model in LM Studio
- Find your model ID: `curl http://localhost:1234/v1/models`
- Add the model ID to the `models` section in your config
- Use it: `altimate-code run -m lmstudio/<model-id>`
!!! tip
    The model key in your config must match the model ID returned by LM Studio's `/v1/models` endpoint. If you change models in LM Studio, update the config to match.

!!! note
    If you changed LM Studio's default port, update the `baseURL` accordingly. No real API key is needed; the `"lm-studio"` placeholder satisfies the SDK requirement.
```json
{
  "provider": {
    "openrouter": {
      "apiKey": "{env:OPENROUTER_API_KEY}"
    }
  },
  "model": "openrouter/anthropic/claude-sonnet-4-6"
}
```

Access 150+ models through a single API key.
```json
{
  "provider": {
    "copilot": {}
  },
  "model": "copilot/gpt-4o"
}
```

Uses your GitHub Copilot subscription. Authenticate with `altimate auth`.
!!! note "Codespaces & GitHub Actions"
    In GitHub Codespaces and GitHub Actions, the machine-scoped `GITHUB_TOKEN` lacks `models:read` permission and cannot be used for GitHub Copilot or GitHub Models inference. altimate automatically skips these providers in machine environments. To use them, authenticate explicitly with `altimate auth` or set a personal access token with `models:read` scope as a Codespace secret.
```json
{
  "provider": {
    "snowflake-cortex": {}
  },
  "model": "snowflake-cortex/claude-sonnet-4-6"
}
```

Authenticate with `altimate auth snowflake-cortex` using a Programmatic Access Token (PAT). Enter credentials as `account-identifier::pat-token`.
Create a PAT in Snowsight: Admin > Security > Programmatic Access Tokens.
Billing flows through your Snowflake credits — no per-token costs.
Available models:
| Model | Tool Calling |
|---|---|
| claude-sonnet-4-6, claude-opus-4-6, claude-sonnet-4-5, claude-opus-4-5, claude-haiku-4-5, claude-4-sonnet, claude-3-7-sonnet, claude-3-5-sonnet | Yes |
| openai-gpt-4.1, openai-gpt-5, openai-gpt-5-mini, openai-gpt-5-nano, openai-gpt-5-chat | Yes |
| llama4-maverick, snowflake-llama-3.3-70b, llama3.1-70b, llama3.1-405b, llama3.1-8b | No |
| mistral-large, mistral-large2, mistral-7b | No |
| deepseek-r1 | No |
!!! note
    Model availability depends on your Snowflake region. Enable cross-region inference with `ALTER ACCOUNT SET CORTEX_ENABLED_CROSS_REGION = 'ANY_REGION'` for full model access.
Any OpenAI-compatible endpoint can be used as a provider:
```json
{
  "provider": {
    "my-provider": {
      "api": "openai",
      "baseURL": "https://my-llm-proxy.example.com/v1",
      "apiKey": "{env:MY_API_KEY}"
    }
  },
  "model": "my-provider/my-model"
}
```

!!! tip
    This works with any service that exposes an OpenAI-compatible chat completions API, including vLLM, LiteLLM, and self-hosted inference servers.
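If the endpoint requires extra request metadata, the documented `headers` field can be combined with the same setup. A hedged sketch (the header name and value are hypothetical):

```json
{
  "provider": {
    "my-provider": {
      "api": "openai",
      "baseURL": "https://my-llm-proxy.example.com/v1",
      "apiKey": "{env:MY_API_KEY}",
      "headers": {
        "X-Proxy-Team": "platform"
      }
    }
  },
  "model": "my-provider/my-model"
}
```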
Set your default model and a smaller model for lightweight tasks:
```json
{
  "model": "anthropic/claude-sonnet-4-6",
  "small_model": "anthropic/claude-haiku-4-5-20251001"
}
```

The `small_model` is used for lightweight tasks like summarization and context compaction.
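Combining these model settings with the Anthropic provider block shown earlier, a complete minimal config might look like this (a sketch, not the only valid layout):

```json
{
  "provider": {
    "anthropic": {
      "apiKey": "{env:ANTHROPIC_API_KEY}"
    }
  },
  "model": "anthropic/claude-sonnet-4-6",
  "small_model": "anthropic/claude-haiku-4-5-20251001"
}
```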
| Field | Type | Description |
|---|---|---|
| `apiKey` | string | API key (supports `{env:...}` and `{file:...}`) |
| `baseURL` | string | Custom API endpoint URL |
| `api` | string | API type (e.g., `"openai"` for compatible endpoints) |
| `headers` | object | Custom HTTP headers to include with requests |
| `region` | string | AWS region (Bedrock only) |
| `accessKeyId` | string | AWS access key (Bedrock only) |
| `secretAccessKey` | string | AWS secret key (Bedrock only) |
| `project` | string | GCP project ID (Google Vertex AI only) |
| `location` | string | GCP region (Google Vertex AI only, default: `us-central1`) |