feat: add Chutes AI as a model provider #12059
Open
het4rk wants to merge 1 commit into continuedev:main
Conversation
Registers Chutes AI (https://chutes.ai) as a built-in provider. Chutes offers serverless AI inference with TEE models via an OpenAI-compatible API at https://llm.chutes.ai/v1/.

- `core/llm/llms/Chutes.ts`: Provider class extending OpenAI
- `core/llm/llms/index.ts`: Register in LLMClasses
- `core/llm/autodetect.ts`: Add to autodetect list
- `packages/llm-info/src/providers/chutes.ts`: Model info
- `packages/llm-info/src/index.ts`: Register provider
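A minimal sketch of what a provider class following this pattern could look like. The `OpenAI` base class below is a self-contained stand-in, not the actual class from `core/llm`, and the field names (`providerName`, `apiBase`, `defaultOptions`) are assumptions for illustration:

```typescript
// Stand-in for the real OpenAI base class in core/llm/llms/OpenAI.ts,
// reduced to the one detail that matters here: a configurable apiBase.
class OpenAI {
  apiBase: string;
  constructor(options: { apiBase?: string }) {
    this.apiBase = options.apiBase ?? "https://api.openai.com/v1/";
  }
}

// Chutes reuses the OpenAI client wholesale and only overrides the
// provider id and the default base URL of the OpenAI-compatible API.
class Chutes extends OpenAI {
  static providerName = "chutes";
  static defaultOptions = {
    apiBase: "https://llm.chutes.ai/v1/",
  };
  constructor(options: { apiBase?: string } = {}) {
    super({ ...Chutes.defaultOptions, ...options });
  }
}
```

Because the endpoint is OpenAI-compatible, the subclass needs no request/response translation, only different defaults.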
Contributor: All contributors have signed the CLA ✍️ ✅

Author: I have read the CLA Document and I hereby sign the CLA

Author: recheck
Adds Chutes AI as a built-in model provider, following the same pattern as existing providers (Groq, DeepSeek, zAI, etc.).
Chutes is a serverless AI inference provider offering TEE (Trusted Execution Environment) models through an OpenAI-compatible API at https://llm.chutes.ai/v1/.
Changes
5 files, 47 lines added (all additive, no breaking changes).
Configuration
In `config.yaml`:

```yaml
models:
  - provider: chutes
    model: chutes/DeepSeek-V3.2-TEE
    apiKey: your-chutes-api-key
```
Provider Details
Provider ID: chutes
Base URL: https://llm.chutes.ai/v1/
API Type: OpenAI-compatible
Auth: API key (CHUTES_API_KEY env var or apiKey in config)
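Since the API is OpenAI-compatible, a request can be built exactly like one to OpenAI's chat completions endpoint, just against the Chutes base URL. The sketch below only constructs the request; the `chat/completions` path and `Bearer` auth follow the OpenAI API convention, and the bare model id (config prefix `chutes/` stripped) is an assumption:

```typescript
// Base URL and auth scheme from the provider details above.
const apiBase = "https://llm.chutes.ai/v1/";
const apiKey = "your-chutes-api-key"; // in practice read from CHUTES_API_KEY

// Resolve the standard OpenAI chat completions path against the base URL.
const url = new URL("chat/completions", apiBase).toString();

const headers = {
  "Content-Type": "application/json",
  Authorization: `Bearer ${apiKey}`,
};

// Model id assumed to be the config name without the "chutes/" prefix.
const body = JSON.stringify({
  model: "DeepSeek-V3.2-TEE",
  messages: [{ role: "user", content: "Hello from Continue" }],
});

// To actually send it (Node 18+):
// fetch(url, { method: "POST", headers, body });
console.log(url); // https://llm.chutes.ai/v1/chat/completions
```

Note the trailing slash on the base URL: `new URL` resolves the relative path against it, so omitting the slash would drop the `/v1` segment.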
Summary by cubic
Adds Chutes AI as a built-in, OpenAI-compatible provider to enable secure TEE models out of the box.
- `chutes` provider with base `https://llm.chutes.ai/v1/` (extends OpenAI client).
- `packages/llm-info`: `chutes/DeepSeek-V3.2-TEE`, `chutes/Qwen3-32B-TEE`, `chutes/Kimi-K2.5-TEE`.
- To use: set `provider: chutes`, select a model above, and set `CHUTES_API_KEY` or an `apiKey`.

Written for commit 8212002. Summary will update on new commits.