feat: support custom OpenAI-compatible API endpoints #2135
GermanBluefox merged 22 commits into ioBroker:master
Conversation
…io, etc.) Add configurable base URL and custom model name to allow using any OpenAI-compatible API provider. Models are now fetched dynamically from the configured API endpoint instead of a hardcoded list. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
… failures Show user-friendly error messages when the API endpoint is unreachable, the API key is invalid, or the model is not found. Add a retry button for failed model loading. Includes translations for all 11 languages.
- Add sendTo handler 'testApiConnection' in main.ts for server-side API connectivity testing
- Add test button in admin jsonConfig using sendTo
- Add WORK IN PROGRESS changelog entry in README.md
- Add documentation for custom API support in docs/en and docs/de
The custom model name config field is unnecessary since available models are now fetched dynamically from the API endpoint and shown in the dropdown. Removed from config, i18n, dialog logic, and documentation.
The sendTo handler was only in src/main.ts, but the compiled build/main.js was missing it, causing the test button to spin indefinitely.
- Add 'chatCompletion' sendTo handler for proxying chat requests
- Model loading already uses the 'testApiConnection' sendTo handler
- Remove OpenAI SDK usage from the browser (no more dangerouslyAllowBrowser)
- All API communication now goes through the adapter backend, which avoids CORS issues with local providers like Ollama
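The backend proxy has to combine the user-configured base URL with the standard OpenAI-compatible paths. A minimal sketch of that joining logic; the helper name `joinApiUrl` is illustrative, not the adapter's actual code:

```typescript
// Hypothetical helper: join a user-configured base URL with an API path.
// Users may enter the base URL with or without a trailing slash.
function joinApiUrl(baseUrl: string, path: string): string {
    // Drop trailing slashes from the configured base URL
    const trimmed = baseUrl.replace(/\/+$/, '');
    // Ensure exactly one slash between base and path
    return `${trimmed}${path.startsWith('/') ? path : `/${path}`}`;
}

// joinApiUrl('http://localhost:11434/v1/', 'chat/completions')
//   -> 'http://localhost:11434/v1/chat/completions'
```

Doing this on the server side (rather than in the editor frontend) is what sidesteps the browser's CORS restrictions for local providers.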
- Fix error state type from boolean to string | false
- Fix disabled prop to use !!error for boolean coercion
- Rebuild frontend assets with server-side API changes
The frontend build process incorrectly deleted tab.html and the custom components directory, breaking the script editor tab. Restore original build artifacts; source changes remain in src-editor for future builds.
The automated build task failed on Node 25 due to deprecated rmdirSync. Built the editor with vite directly and manually copied assets, and generated tab.html with socket.io and monaco script injection.
…requests Local LLMs (Ollama) need Content-Length to process the request body correctly and more time to generate responses with large prompts. Also explicitly set stream: false for compatibility.
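A sketch of what building such a request can look like, assuming Node's global `Buffer`; the function and field names are illustrative, not the adapter's actual code:

```typescript
// Hypothetical request builder for an OpenAI-compatible chat endpoint.
interface ChatMessage {
    role: 'system' | 'user' | 'assistant';
    content: string;
}

function buildChatRequest(
    model: string,
    messages: ChatMessage[],
): { body: string; headers: Record<string, string> } {
    // stream: false is set explicitly — some local servers default to streaming
    const body = JSON.stringify({ model, messages, stream: false });
    return {
        body,
        headers: {
            'Content-Type': 'application/json',
            // Byte length, not string length: multi-byte UTF-8 content
            // would otherwise yield a too-small Content-Length
            'Content-Length': String(Buffer.byteLength(body)),
        },
    };
}
```

Using `Buffer.byteLength` instead of `body.length` matters as soon as prompts contain non-ASCII characters.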
Local models (Ollama) often output <think> blocks, <|endoftext|> tokens, and reasoning commentary alongside code. The response parser now:
- strips <think>...</think> blocks
- strips special tokens (<|endoftext|>, <|im_start|>, <|im_end|>)
- intelligently extracts code from unstructured responses
- removes trailing LLM commentary after code
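The first two of those steps can be sketched with a couple of regexes; this is an illustration of the technique, not the adapter's exact implementation:

```typescript
// Hypothetical cleanup pass for responses from local reasoning models.
function stripModelArtifacts(response: string): string {
    return response
        // Remove <think>...</think> reasoning blocks, including multi-line ones
        // ([\s\S] matches across newlines; *? keeps the match non-greedy)
        .replace(/<think>[\s\S]*?<\/think>/g, '')
        // Remove special tokens some local models leak into the output
        .replace(/<\|endoftext\|>|<\|im_start\|>|<\|im_end\|>/g, '')
        .trim();
}
```

The non-greedy `*?` is important: with a greedy match, two `<think>` blocks in one response would be merged into a single match and the code between them would be deleted too.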
…t ChatGPT
- Add Google Gemini as recommended free provider with setup instructions
- Add DeepSeek as affordable alternative
- Add guidance for local models (14B+ recommended)
- Note that the free OpenAI/ChatGPT API no longer works for code generation
Google's API returns model IDs like 'models/gemini-2.0-flash', but the chat completions endpoint expects just 'gemini-2.0-flash'.
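The normalization this describes is a simple prefix strip; a sketch with an illustrative helper name:

```typescript
// Hypothetical helper: normalize Google-style model IDs
// ('models/gemini-2.0-flash') to the bare ID the chat
// completions endpoint expects ('gemini-2.0-flash').
function normalizeModelId(id: string): string {
    return id.startsWith('models/') ? id.slice('models/'.length) : id;
}
```

IDs from other providers pass through unchanged, so the helper is safe to apply to every entry of the dynamically fetched model list.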
- Remove old OpenAiDialog/ScriptEditor assets no longer referenced
- Update WORK IN PROGRESS changelog (remove 'custom model name', add server-side proxy and artifact stripping)
Bug: Empty API key rejected for local providers (Ollama, LM Studio)

When using a local provider like Ollama that doesn't require authentication, the adapter currently rejects the request with "No API key provided".

Affected code in `src/main.ts`:

```ts
if (!apiKey) {
    this.sendTo(obj.from, obj.command, { error: 'No API key provided' }, obj.callback);
    break;
}
```

Workaround: users can enter any dummy string as the API key.

Suggested fix: allow an empty API key when a custom base URL is configured:

```ts
if (!apiKey && !baseUrl) {
    this.sendTo(obj.from, obj.command, { error: 'No API key provided' }, obj.callback);
    break;
}
```

Tested with Ollama.
Local providers (Ollama, LM Studio) don't require authentication. The API key check now only rejects requests when no key AND no custom base URL is set. The Authorization header is omitted entirely when no key is configured.
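Both halves of that behaviour are easy to factor into small pure functions. A sketch under the names `validateConfig` and `buildAuthHeaders` (illustrative, not the adapter's actual code):

```typescript
// Reject only when neither an API key nor a custom base URL is configured.
function validateConfig(apiKey: string, baseUrl: string): string | null {
    if (!apiKey && !baseUrl) {
        return 'No API key provided';
    }
    return null; // configuration is acceptable
}

// Omit the Authorization header entirely when no key is set —
// some local servers reject requests carrying an empty Bearer token.
function buildAuthHeaders(apiKey: string): Record<string, string> {
    const headers: Record<string, string> = { 'Content-Type': 'application/json' };
    if (apiKey) {
        headers.Authorization = `Bearer ${apiKey}`;
    }
    return headers;
}
```

With this split, a user pointing at `http://localhost:11434/v1` with an empty key passes validation and sends unauthenticated requests, while the hosted-provider path is unchanged.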
@Eistee82 Have you tried other models? E.g. `babbage-002` delivers: "This is not a chat model and thus not supported in the v1/chat/completions endpoint. Did you mean to use v1/completions?"
@Eistee82 Thank you for updating AI agents. The version is now in build. |
Summary

- …strip reasoning artifacts (`<think>`, `<|endoftext|>`, etc.) from responses for local models

Changes

- `src/main.ts`: add `testApiConnection` and `chatCompletion` sendTo handlers
- `src-editor/src/OpenAi/OpenAiDialog.tsx`: refactor to use server-side sendTo instead of the browser-side OpenAI SDK; add dynamic model loading, error handling, response cleanup
- `admin/jsonConfig.json`: add base URL field and test connection button
- `io-package.json`: add `gptBaseUrl` native config
- `admin/i18n/*.json`: translations for 3 new admin UI keys (11 languages)
- `src-editor/src/i18n/*.json`: translations for 8 new editor UI keys (11 languages)
- `docs/en/README.md`, `docs/de/README.md`: full documentation with provider recommendations

Test plan

- Google Gemini (`https://generativelanguage.googleapis.com/v1beta/openai`): models load dynamically, code generation works
- Ollama (`http://localhost:11434/v1`): models load, code generation works

🤖 Generated with Claude Code