
feat: add MiniMax as LLM judge provider#1263

Open
octo-patch wants to merge 1 commit into EvolvingLMMs-Lab:main from octo-patch:feature/add-minimax-provider

Conversation

@octo-patch

Summary

  • Add MiniMax as a first-class LLM judge provider (minimax and async_minimax api_type), enabling evaluation tasks to leverage MiniMax models (M2.7, M2.5, M2.5-highspeed) via the existing ProviderFactory
  • Both sync (MiniMaxProvider) and async (AsyncMiniMaxProvider) implementations following the same patterns as the existing OpenAI providers
  • Temperature clamping to MiniMax's accepted [0.0, 1.0] range and automatic <think>...</think> tag stripping for reasoning models
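The temperature clamping described above can be sketched as a one-liner (a minimal illustration; the actual method name inside MiniMaxProvider is an assumption, but the [0.0, 1.0] range is the one stated in this PR):

```python
def clamp_temperature(temperature: float) -> float:
    """Clamp a sampling temperature into MiniMax's accepted [0.0, 1.0] range."""
    return max(0.0, min(1.0, temperature))
```

Values passed through unchanged when already in range; out-of-range values are pinned to the nearest bound rather than rejected, so existing task configs keep working.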

Files changed (5 files, 820 additions)

File | Description
lmms_eval/llm_judge/providers/minimax.py | Sync MiniMax provider extending ServerInterface
lmms_eval/llm_judge/providers/async_minimax.py | Async MiniMax provider extending AsyncServerInterface
lmms_eval/llm_judge/providers/__init__.py | Export new providers
lmms_eval/llm_judge/factory.py | Register minimax and async_minimax in ProviderFactory
test/eval/test_minimax_provider.py | 31 unit tests + 3 integration tests

Usage

export API_TYPE=minimax
export MINIMAX_API_KEY=your-key

Or programmatically:

from lmms_eval.llm_judge import ProviderFactory, ServerConfig

provider = ProviderFactory.create_provider(
    api_type="minimax",
    config=ServerConfig(model_name="MiniMax-M2.7")
)

Test plan

  • 31 unit tests covering temperature clamping, think-tag stripping, provider init, evaluate, retries, JSON format, fallback requests, factory registration
  • 3 integration tests against live MiniMax API (skipped without MINIMAX_API_KEY)
  • All existing tests continue to pass

Add MiniMax (https://www.minimax.io) as a first-class LLM judge provider,
enabling evaluation tasks to use MiniMax models (M2.7, M2.5, M2.5-highspeed)
via the existing ProviderFactory.

Changes:
- MiniMaxProvider (sync) and AsyncMiniMaxProvider extending ServerInterface
  and AsyncServerInterface respectively
- OpenAI-compatible API via base_url routing to api.minimax.io/v1
- Temperature clamping to MiniMax's [0.0, 1.0] range
- Automatic <think>...</think> tag stripping for reasoning models
- Factory registration: 'minimax' and 'async_minimax' api_type values
- 31 unit tests + 3 integration tests
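The think-tag stripping listed in the changes can be sketched with a small regex (an illustration only; the function name and exact regex in the provider are assumptions):

```python
import re

# Reasoning models may emit their chain of thought wrapped in <think>...</think>
# before the actual judgement; strip it so downstream parsing sees only the verdict.
THINK_TAG_RE = re.compile(r"<think>.*?</think>", re.DOTALL)

def strip_think_tags(text: str) -> str:
    """Remove <think>...</think> blocks and surrounding whitespace."""
    return THINK_TAG_RE.sub("", text).strip()
```

The re.DOTALL flag matters here: reasoning spans are usually multi-line, and without it the `.` would stop at newlines and leave the tags in place.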

Usage:
  export API_TYPE=minimax
  export MINIMAX_API_KEY=your-key
@Luodian
Contributor

Luodian commented Mar 26, 2026

Why can't we use the OpenAI-format API to request the MiniMax model?

Comment on lines +55 to +66
        try:
            from openai import OpenAI

            self.client = OpenAI(
                api_key=self.api_key,
                base_url=self.MINIMAX_BASE_URL,
            )
            self.use_client = True
        except ImportError:
            eval_logger.warning(
                "OpenAI client not available, falling back to requests for MiniMax"
            )
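For the fallback path mentioned in the warning above, a request has to be assembled by hand against MiniMax's OpenAI-compatible endpoint. A minimal sketch of what that construction might look like (the helper name and payload shape are assumptions; the base URL is the one stated in this PR, and the fields follow the OpenAI chat-completions schema that the endpoint accepts):

```python
def build_minimax_request(model: str, messages: list, temperature: float, api_key: str) -> dict:
    """Assemble an OpenAI-compatible chat-completions request for MiniMax.

    Illustrative only: returns the pieces an HTTP client (e.g. requests.post)
    would need, without actually sending anything.
    """
    return {
        "url": "https://api.minimax.io/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {
            "model": model,
            "messages": messages,
            # MiniMax only accepts temperatures in [0.0, 1.0]
            "temperature": max(0.0, min(1.0, temperature)),
        },
    }
```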
Collaborator


Using the OpenAI client as the base request caller. I agree this could be similar to the openai provider.

