
feat: externalize model config to remote JSON with refresh support#943

Merged
stephanj merged 3 commits into master from claude/externalize-model-config-SrOaI
Feb 19, 2026

Conversation

@stephanj
Collaborator

Summary

  • Externalize cloud model definitions (names, costs, context windows) to a remote models.json hosted on genie.devoxx.com, fetched with a 24h TTL and plugin version check
  • Add ModelConfigService for background fetching/caching and ModelConfig/ModelConfigEntry DTOs
  • Extend LLMModelRegistryService to merge remote config over hardcoded fallback models
  • Enable the refresh button for cloud providers (Anthropic, OpenAI, Google, etc.) — previously it only worked for local providers
  • Show a toast notification after refresh with the number of new/removed models
  • Add Gemini 3.1 Pro Preview to models.json
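For context, a hedged sketch of what the externalized models.json might look like. The field names and cost values here are assumptions inferred from the summary (names, display names, input/output costs, context windows), not the actual schema:

```json
{
  "schemaVersion": 1,
  "providers": {
    "Google": [
      {
        "name": "gemini-3.1-pro-preview",
        "displayName": "Gemini 3.1 Pro Preview",
        "inputCost": 1.25,
        "outputCost": 10.0,
        "contextWindow": 1048576
      }
    ]
  }
}
```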

Test plan

  • Build plugin and verify no compilation errors
  • Select a cloud provider (e.g. Google), click refresh, verify models update and toast notification appears
  • Verify local providers (Ollama, LMStudio, etc.) refresh still works as before
  • Verify models.json is fetched on first launch and cached for subsequent sessions
  • Verify hardcoded fallback models are used when remote fetch fails

🤖 Generated with Claude Code

claude and others added 3 commits February 19, 2026 16:29
…ote JSON

Add a models.json file in docusaurus/static/api/ that externalizes all
cloud provider model definitions (names, display names, input/output costs,
context window sizes). This allows updating pricing or adding new models
without a plugin release.

Following the existing WelcomeContentService pattern:
- ModelConfig/ModelConfigEntry: Java models for the JSON structure
- ModelConfigService: fetches from genie.devoxx.com/api/models.json with
  24h cache TTL, persistent cache across IDE restarts, schema versioning
- LLMModelRegistryService: loads remote config on startup, falls back to
  hardcoded models for providers not in the remote config
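The 24h cache TTL described above might be checked along these lines. This is a minimal sketch; the class and field names (`ModelConfigCache`, `lastFetched`) are illustrative assumptions, not the actual `ModelConfigService` implementation:

```java
import java.time.Duration;
import java.time.Instant;

// Sketch of a 24h TTL check for a cached remote config.
public class ModelConfigCache {
    static final Duration TTL = Duration.ofHours(24);

    private Instant lastFetched; // null until the first successful fetch

    // Stale when never fetched, or when the TTL has elapsed.
    public boolean isStale(Instant now) {
        return lastFetched == null
            || Duration.between(lastFetched, now).compareTo(TTL) >= 0;
    }

    public void markFetched(Instant now) {
        lastFetched = now;
    }
}
```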

https://claude.ai/code/session_01E1gdjeFfBQTLYT15oVRAto
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…ication

The refresh button now triggers an immediate HTTP reload of models.json for
cloud providers (Anthropic, OpenAI, Google, etc.) instead of showing a
"not available" notification. After refresh, a toast shows the number of
new/removed models so the user knows what changed.
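The new/removed counts for the toast could be computed with two set differences over model names. A hypothetical helper, not the PR's actual code:

```java
import java.util.HashSet;
import java.util.Set;

// Counts models added and removed between two refreshes.
public class ModelDiff {
    // Returns {addedCount, removedCount}.
    public static int[] diff(Set<String> before, Set<String> after) {
        Set<String> added = new HashSet<>(after);
        added.removeAll(before);          // in the new list only
        Set<String> removed = new HashSet<>(before);
        removed.removeAll(after);         // in the old list only
        return new int[] { added.size(), removed.size() };
    }
}
```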

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@stephanj stephanj merged commit 30a101e into master Feb 19, 2026
6 of 7 checks passed
@stephanj stephanj deleted the claude/externalize-model-config-SrOaI branch February 19, 2026 17:15