
feat: add MiniMax Token Plan provider with hardcoded model list#7609

Merged
Soulter merged 6 commits into AstrBotDevs:master from Blueteemo:feat/add-minimax-token-plan-provider
Apr 17, 2026

Conversation

Contributor

@Blueteemo Blueteemo commented Apr 16, 2026

Background

This PR fixes the problem reported in issue #7585: users on the MiniMax Token Plan cannot use AstrBot normally.

Investigation found:

  • The MiniMax Token Plan API does not provide a /models endpoint, so the model list cannot be fetched dynamically
  • The existing anthropic_chat_completion provider depends on the /models endpoint, so Token Plan users cannot obtain a model list
  • This is a limitation of the Token Plan API itself, not of AstrBot's implementation

Solution

Add a dedicated minimax_token_plan provider:

  • Inherits from ProviderAnthropic, reusing all chat/completion/tool-call logic
  • api_base is hardcoded to https://api.minimaxi.com/anthropic
  • Injects the Authorization: Bearer <token> header via custom_headers (the authentication scheme Token Plan requires)
  • get_models() returns a hardcoded model list instead of calling /models
  • Model list: MiniMax-M2.7, MiniMax-M2.5, MiniMax-M2.1, MiniMax-M2
  • Highspeed models are excluded because they require a premium tier
  • The configured model name is validated at initialization; a name outside the list raises a clear ValueError

Model list source: MiniMax's official Anthropic-compatible API documentation

Outlook

If MiniMax later adds a /models endpoint, only get_models() needs to switch to a dynamic call; no other logic changes are required.
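Taken together, the points above can be sketched as follows. This is a minimal, self-contained illustration rather than the actual AstrBot source: ProviderAnthropic is stubbed here so the example runs standalone, and the real adapter's constructor signature and registration decorator may differ.

```python
# Hardcoded list, since the Token Plan API exposes no /models endpoint.
MINIMAX_TOKEN_PLAN_MODELS = [
    "MiniMax-M2.7",
    "MiniMax-M2.5",
    "MiniMax-M2.1",
    "MiniMax-M2",
]


class ProviderAnthropic:
    """Stub standing in for AstrBot's real Anthropic provider adapter."""

    def __init__(self, provider_config: dict, provider_settings: dict):
        self.provider_config = provider_config
        self.provider_settings = provider_settings

    def set_model(self, model: str) -> None:
        self.model = model


class ProviderMiniMaxTokenPlan(ProviderAnthropic):
    def __init__(self, provider_config: dict, provider_settings: dict):
        # Copy instead of mutating the shared config dict; pin the API base
        # and inject the Bearer auth header that Token Plan requires.
        provider_config = {
            **provider_config,
            "api_base": "https://api.minimaxi.com/anthropic",
            "custom_headers": {
                "Authorization": f"Bearer {provider_config.get('key', '')}",
            },
        }
        super().__init__(provider_config, provider_settings)

        model = provider_config.get("model", "MiniMax-M2.7")
        if model not in MINIMAX_TOKEN_PLAN_MODELS:
            raise ValueError(
                f"Unsupported MiniMax Token Plan model: {model!r}. "
                f"Supported models: {', '.join(MINIMAX_TOKEN_PLAN_MODELS)}"
            )
        self.set_model(model)

    async def get_models(self) -> list[str]:
        # No /models call: return the fixed list.
        return list(MINIMAX_TOKEN_PLAN_MODELS)
```

Reusing the Anthropic adapter keeps chat, completion, and tool calls untouched; the subclass only pins the endpoint, injects auth, and answers model discovery locally.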


Summary

Add a new provider, minimax_token_plan, dedicated to MiniMax's Token Plan subscription API.

Problem: When using a MiniMax Token Plan API key with the existing anthropic_chat_completion provider, users cannot fetch the model list because the Token Plan API does not expose a /models endpoint. See issue #7585.

Solution:

  • Create a new provider minimax_token_plan that inherits from ProviderAnthropic
  • api_base hardcoded to https://api.minimaxi.com/anthropic
  • Use custom_headers to inject Authorization: Bearer <token> header (required by Token Plan)
  • get_models() returns hardcoded model list: MiniMax-M2.7, M2.5, M2.1, M2 (from official docs)
  • Highspeed models excluded (require premium tier)
  • Model name validation at initialization to catch unsupported models early

Future: If MiniMax adds a /models endpoint in the future, only get_models() needs to be updated.

Closes #7585


Summary by Sourcery

Add a dedicated MiniMax Token Plan provider that reuses the Anthropic adapter while handling Token Plan–specific configuration and model discovery.

New Features:

  • Introduce a new minimax_token_plan provider adapter for MiniMax Token Plan subscriptions with a fixed API base and auth header requirements.

Enhancements:

  • Provide a hardcoded MiniMax Token Plan model list in the provider to support model selection without relying on a /models endpoint.

…strBotDevs#7585)

- Add new provider 'minimax_token_plan' for MiniMax Token Plan users
- Inherit ProviderAnthropic to reuse all chat/completion logic
- Hardcode api_base to https://api.minimaxi.com/anthropic
- get_models() returns hardcoded list: MiniMax-M2.7, M2.5, M2.1, M2
- Highspeed models excluded (require premium tier)
- Reason for hardcoding: Token Plan API does not expose /models endpoint
- Fixes: AstrBotDevs#7585
@auto-assign auto-assign bot requested review from Soulter and advent259141 April 16, 2026 20:31
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. labels Apr 16, 2026
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've found 2 issues, and left some high level feedback:

  • Since api_base is intentionally fixed in __init__, consider omitting it from default_config_tmpl or clearly treating it as read-only to avoid confusing users who might think they can override it.
  • Instead of mutating the incoming provider_config dict in-place when setting api_base and auth_header, consider copying it first (e.g., provider_config = {**provider_config, "api_base": ..., "auth_header": True}) to avoid unexpected side effects on shared configuration objects.
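The second point can be illustrated with a small self-contained sketch (the helper names init_mutating and init_copying are hypothetical, chosen only to contrast the two behaviors):

```python
MINIMAX_API_BASE = "https://api.minimaxi.com/anthropic"


def init_mutating(cfg: dict) -> dict:
    # Writes into the caller's dict: the change leaks to any other
    # provider sharing this configuration object.
    cfg["api_base"] = MINIMAX_API_BASE
    return cfg


def init_copying(cfg: dict) -> dict:
    # Shallow-copies first: the caller's dict is left untouched.
    return {**cfg, "api_base": MINIMAX_API_BASE}
```

With the copying variant, a config dict shared across providers keeps its original keys; the mutating variant silently rewrites it for everyone holding a reference.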
## Individual Comments

### Comment 1
<location path="astrbot/core/provider/sources/minimax_token_plan_source.py" line_range="13-21" />
<code_context>
+@register_provider_adapter(
+    "minimax_token_plan",
+    "MiniMax Token Plan 提供商适配器",
+    default_config_tmpl={
+        "key": "",
+        "api_base": "https://api.minimaxi.com/anthropic",
+    },
+    provider_display_name="MiniMax Token Plan",
</code_context>
<issue_to_address>
**suggestion:** Configuration template exposes `api_base` even though it is always overridden.

Because `__init__` always overwrites `provider_config["api_base"]` with the MiniMax endpoint, exposing `api_base` here implies it’s user-configurable when it isn’t. Please either remove it from `default_config_tmpl` or mark it clearly as internal-only to avoid a misleading no-op config option.

```suggestion
@register_provider_adapter(
    "minimax_token_plan",
    "MiniMax Token Plan 提供商适配器",
    default_config_tmpl={
        "key": "",
    },
    provider_display_name="MiniMax Token Plan",
)
```
</issue_to_address>

### Comment 2
<location path="astrbot/core/provider/sources/minimax_token_plan_source.py" line_range="39-44" />
<code_context>
+            provider_settings,
+        )
+
+        self.set_model(provider_config.get("model", "MiniMax-M2.7"))
+
+    async def get_models(self) -> list[str]:
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Model selection is not validated against the known Token Plan model list.

Given `MINIMAX_TOKEN_PLAN_MODELS` is the source of truth for supported models, consider validating `provider_config["model"]` against this list before calling `set_model`. Either normalize to a supported value or raise a clear error for unsupported models to avoid confusing downstream failures and to keep behavior consistent with `get_models()`.

```suggestion
        super().__init__(
            provider_config,
            provider_settings,
        )

        # Validate configured model against the supported Token Plan model list
        configured_model = provider_config.get("model", "MiniMax-M2.7")
        if configured_model not in MINIMAX_TOKEN_PLAN_MODELS:
            raise ValueError(
                f"Unsupported MiniMax Token Plan model: {configured_model!r}. "
                f"Supported models: {', '.join(MINIMAX_TOKEN_PLAN_MODELS)}"
            )

        self.set_model(configured_model)
```
</issue_to_address>


- Remove api_base from default_config_tmpl (always overridden, misleading)
- Add model validation against MINIMAX_TOKEN_PLAN_MODELS
- Raise clear ValueError if user configures an unsupported model

Addressed Sourcery AI review comments.
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces the MiniMax Token Plan provider adapter, extending the Anthropic provider with a fixed API base and a hardcoded model list. A review comment points out that the auth_header configuration is likely ineffective and suggests using custom_headers to correctly provide the Authorization token.

Blueteemo and others added 4 commits April 17, 2026 04:44
MiniMax Token Plan requires Authorization: Bearer <token> header.
Use custom_headers to inject the correct auth header instead of
the non-functional auth_header key.

Addressed Gemini Code Assist review comment.
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Apr 17, 2026
@Soulter Soulter changed the title feat: add MiniMax Token Plan provider with hardcoded model list (fix #7585) feat: add MiniMax Token Plan provider with hardcoded model list Apr 17, 2026
@Soulter Soulter merged commit 0ca6ba9 into AstrBotDevs:master Apr 17, 2026
21 checks passed
@Blueteemo Blueteemo deleted the feat/add-minimax-token-plan-provider branch April 17, 2026 11:36
Aster-amellus pushed a commit to Aster-amellus/AstrBot that referenced this pull request Apr 18, 2026
…BotDevs#7609)

* feat: add MiniMax Token Plan provider with hardcoded model list (fix AstrBotDevs#7585)

- Add new provider 'minimax_token_plan' for MiniMax Token Plan users
- Inherit ProviderAnthropic to reuse all chat/completion logic
- Hardcode api_base to https://api.minimaxi.com/anthropic
- get_models() returns hardcoded list: MiniMax-M2.7, M2.5, M2.1, M2
- Highspeed models excluded (require premium tier)
- Reason for hardcoding: Token Plan API does not expose /models endpoint
- Fixes: AstrBotDevs#7585

* fix: remove api_base from config template and add model validation

- Remove api_base from default_config_tmpl (always overridden, misleading)
- Add model validation against MINIMAX_TOKEN_PLAN_MODELS
- Raise clear ValueError if user configures an unsupported model

Addressed Sourcery AI review comments.

* fix: use custom_headers for Bearer token auth instead of auth_header

MiniMax Token Plan requires Authorization: Bearer <token> header.
Use custom_headers to inject the correct auth header instead of
the non-functional auth_header key.

Addressed Gemini Code Assist review comment.

* fix: update MiniMax Token Plan provider adapter and documentation to English

* feat: add MiniMax Token Plan configuration and icon support

* feat: remove default configuration template from MiniMax Token Plan provider adapter

---------

Co-authored-by: Soulter <905617992@qq.com>
@y524331729

I am a user who purchased the MiniMax Plus highspeed tier. The current hardcoded restriction prevents me from configuring MiniMax-M2.7-highspeed and throws a ValueError. Please change the whitelist check to a warning, or add the highspeed models back to the list, instead of hard-blocking.

@Blueteemo
Contributor Author

I am a user who purchased the MiniMax Plus highspeed tier. The current hardcoded restriction prevents me from configuring MiniMax-M2.7-highspeed and throws a ValueError. Please change the whitelist check to a warning, or add the highspeed models back to the list, instead of hard-blocking.

Thanks for the feedback! The new PR #7692 fixes this:

  • Expands the hardcoded model list to the complete list (including all highspeed variants)
  • Replaces the ValueError with a logger.warning, so configuration is no longer hard-blocked

Until the new PR is merged, see issue #7585 for a temporary workaround configuration.
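A rough sketch of the validation change described in that follow-up, using a hypothetical validate_model helper; the extended model list in the new PR may differ from the illustrative entries shown here.

```python
import logging

logger = logging.getLogger(__name__)

# Illustrative extended list: the base models from this PR plus a highspeed
# variant named in this thread. The actual list in #7692 may differ.
MINIMAX_TOKEN_PLAN_MODELS = [
    "MiniMax-M2.7",
    "MiniMax-M2.5",
    "MiniMax-M2.1",
    "MiniMax-M2",
    "MiniMax-M2.7-highspeed",
]


def validate_model(model: str) -> str:
    # Warn instead of raising, so unknown or newly released models
    # are passed through rather than hard-blocked at startup.
    if model not in MINIMAX_TOKEN_PLAN_MODELS:
        logger.warning(
            "Model %s is not in the known MiniMax Token Plan list; "
            "using it anyway.",
            model,
        )
    return model
```

The trade-off: a typo in the model name now surfaces as an API error at request time instead of a startup failure, but premium-tier users are no longer locked out.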



Development

Successfully merging this pull request may close these issues.

[Feature Request] Support for the MiniMax Token Plan subscription mode

3 participants