
feat: add MiniMax as first-class LLM provider#140

Open
octo-patch wants to merge 1 commit into nicepkg:master from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

Add MiniMax AI as a dedicated model provider for the Aide VSCode extension, alongside the existing OpenAI, Anthropic, and Azure OpenAI providers.

Changes

  • New provider: MiniMaxModelProvider extending BaseModelProvider via ChatOpenAI from @langchain/openai
    • Temperature clamping to (0.0, 1.0] as required by MiniMax API
    • Clearing unsupported OpenAI-specific parameters (frequencyPenalty, n, presencePenalty, topP)
    • Default base URL fallback to https://api.minimax.io/v1
  • URL routing: Added minimax to ModelUrlType union and URL parsing regex in parse-model-base-url.ts
  • Provider registration: Added MiniMaxModelProvider to the provider map in helpers.ts
  • Documentation: Added MiniMax usage guide pages for both English and Chinese
  • Sidebar: Added MiniMax navigation entries in VitePress config for both locales
  • Tests: 30 unit tests covering URL parsing, temperature clamping, provider mapping, model validation, and configuration format
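The temperature clamping and parameter cleanup described above can be sketched as follows. This is an illustrative sketch only: the function names (`clampTemperature`, `sanitizeMiniMaxOptions`), the epsilon used for the lower bound, and the option shape are assumptions, not the actual identifiers in `minimax.ts`.

```typescript
// MiniMax requires temperature in (0.0, 1.0]; out-of-range values are clamped.
// EPSILON is an assumed smallest accepted value, since 0 itself is excluded.
function clampTemperature(temperature: number | undefined): number {
  const EPSILON = 0.01;
  if (temperature === undefined) return 1.0;
  if (temperature <= 0) return EPSILON;
  if (temperature > 1) return 1.0;
  return temperature;
}

interface OpenAIStyleOptions {
  temperature?: number;
  frequencyPenalty?: number;
  n?: number;
  presencePenalty?: number;
  topP?: number;
  [key: string]: unknown;
}

// Drop the OpenAI-specific parameters the PR says MiniMax does not accept,
// and clamp the temperature in the same pass.
function sanitizeMiniMaxOptions(opts: OpenAIStyleOptions): OpenAIStyleOptions {
  const { frequencyPenalty, n, presencePenalty, topP, ...rest } = opts;
  return { ...rest, temperature: clampTemperature(opts.temperature) };
}
```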

Supported Models

| Model | Context Window | Description |
| --- | --- | --- |
| MiniMax-M2.7 | 1M tokens | Latest flagship model |
| MiniMax-M2.7-highspeed | 1M tokens | Faster variant |
| MiniMax-M2.5 | 204K tokens | Previous generation |
| MiniMax-M2.5-highspeed | 204K tokens | Faster variant |
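A lookup over the supported models could look like the sketch below. The data mirrors the table above; the constant and helper names are hypothetical and need not match how `test/minimax.test.ts` structures its model-validation checks.

```typescript
// Supported MiniMax models and their context windows (tokens), per the PR table.
const MINIMAX_CONTEXT_WINDOWS: Record<string, number> = {
  'MiniMax-M2.7': 1_000_000,
  'MiniMax-M2.7-highspeed': 1_000_000,
  'MiniMax-M2.5': 204_000,
  'MiniMax-M2.5-highspeed': 204_000,
};

// A model is valid for this provider if it appears in the table.
function isSupportedMiniMaxModel(model: string): boolean {
  return model in MINIMAX_CONTEXT_WINDOWS;
}
```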

Usage

Configure in VSCode settings:

```json
{
  "aide.openaiBaseUrl": "minimax@https://api.minimax.io/v1",
  "aide.openaiKey": "your-minimax-api-key",
  "aide.openaiModel": "MiniMax-M2.7"
}
```

The minimax@ prefix tells the extension to apply MiniMax-specific handling (temperature clamping, removal of unsupported parameters) instead of treating the endpoint as plain OpenAI.
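The prefix-based routing could be sketched as below. This is a guess at the shape of the logic, not the actual regex in `parse-model-base-url.ts`; the other union members of `ModelUrlType` are inferred from the providers named in the summary.

```typescript
// Assumed union of provider prefixes; the PR only confirms that 'minimax'
// was added alongside the existing providers.
type ModelUrlType = 'openai' | 'anthropic' | 'azure-openai' | 'minimax';

const DEFAULT_MINIMAX_BASE_URL = 'https://api.minimax.io/v1';

// Split a "provider@url" setting into its provider tag and base URL.
function parseBaseUrl(raw: string): { urlType: ModelUrlType; baseUrl: string } {
  const match = raw.match(/^(openai|anthropic|azure-openai|minimax)@(.*)$/);
  if (!match) {
    // No recognized prefix: treat it as a plain OpenAI-compatible base URL.
    return { urlType: 'openai', baseUrl: raw };
  }
  const urlType = match[1] as ModelUrlType;
  let baseUrl = match[2];
  // The PR mentions a default base URL fallback for MiniMax.
  if (urlType === 'minimax' && baseUrl === '') {
    baseUrl = DEFAULT_MINIMAX_BASE_URL;
  }
  return { urlType, baseUrl };
}
```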

Files Changed (10 files, 413 additions)

| File | Description |
| --- | --- |
| src/extension/ai/model-providers/minimax.ts | New MiniMax provider |
| src/extension/ai/parse-model-base-url.ts | Add minimax to URL type union and regex |
| src/extension/ai/helpers.ts | Register MiniMax in provider map |
| website/en/guide/use-another-llm/minimax.md | English documentation |
| website/zh/guide/use-another-llm/minimax.md | Chinese documentation |
| website/.vitepress/config/en.ts | English sidebar entry |
| website/.vitepress/config/zh.ts | Chinese sidebar entry |
| test/minimax.test.ts | 30 unit tests |

Test Plan

  • All 30 new unit tests pass (npx vitest run)
  • Existing tests remain passing (31 total)
  • ESLint passes with no errors
  • Manual testing with MiniMax API key and M2.7 model

Add MiniMax AI as a dedicated model provider with:
- MiniMaxModelProvider extending BaseModelProvider via ChatOpenAI
- Temperature clamping to (0.0, 1.0] as required by MiniMax API
- Clearing unsupported OpenAI-specific parameters
- Default base URL fallback to https://api.minimax.io/v1
- minimax@ URL prefix support in parseModelBaseUrl
- Documentation pages for both EN and ZH
- Sidebar navigation entries in VitePress config
- 30 unit tests covering URL parsing, temperature clamping,
  provider mapping, model validation, and configuration format

Supported models: MiniMax-M2.7, MiniMax-M2.7-highspeed,
MiniMax-M2.5, MiniMax-M2.5-highspeed

Usage: Set aide.openaiBaseUrl to "minimax@https://api.minimax.io/v1"