
feat: add MiniMax M2.5 & M2.7 as BAML LLM provider clients #91

Open

octo-patch wants to merge 2 commits into humanlayer:main from octo-patch:feature/add-minimax-provider

Conversation

octo-patch commented Mar 17, 2026

Summary

  • Add MiniMax as a first-class LLM provider using BAML client definitions via OpenAI-compatible API
  • MiniMax-M2.7 and MiniMax-M2.7-highspeed are the latest recommended models (set as defaults in round-robin/fallback strategies)
  • MiniMax-M2.5 and MiniMax-M2.5-highspeed remain available for backward compatibility
  • All clients use MINIMAX_API_KEY env var and https://api.minimax.io/v1 base URL
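The PR does not show the client blocks themselves, but given the env var, base URL, and model names above, a minimal sketch of one such definition (assuming BAML's `openai-generic` provider) would look like:

```baml
// Sketch of one of the four client blocks described in this PR.
// Model name, env var, and base_url are taken from the PR summary;
// the exact options in the merged clients.baml may differ.
client<llm> MiniMaxM27 {
  provider "openai-generic"
  options {
    base_url "https://api.minimax.io/v1"
    api_key env.MINIMAX_API_KEY
    model "MiniMax-M2.7"
  }
}
```

The other three clients (`MiniMaxM27Highspeed`, `MiniMaxM25`, `MiniMaxM25Highspeed`) would presumably differ only in the `model` string.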

Changes (6 files, ~370 additions)

  • packages/create-12-factor-agent/template/baml_src/clients.baml — Add MiniMaxM27, MiniMaxM27Highspeed, MiniMaxM25, MiniMaxM25Highspeed client blocks; update CustomFast and OpenaiFallback strategies
  • workshops/2025-05/final/baml_src/clients.baml — Same updates for workshop template
  • packages/create-12-factor-agent/template/test/minimax-provider.test.ts — 23 unit tests validating all MiniMax client config
  • packages/create-12-factor-agent/template/test/minimax-integration.test.ts — 10 integration tests (5 for M2.7, 5 for M2.5) verifying live API responses
  • packages/create-12-factor-agent/template/baml_src/agent.baml — Add MiniMax-targeted test cases
  • packages/create-12-factor-agent/template/README.md — Document MiniMax client usage

Test plan

  • 23 unit tests pass (minimax-provider.test.ts)
  • 10 integration tests pass against live MiniMax API (minimax-integration.test.ts)
  • All four model variants (M2.7, M2.7-highspeed, M2.5, M2.5-highspeed) verified working
  • Existing OpenAI and Anthropic clients unaffected

Add MiniMax M2.5 and M2.5-highspeed as pre-configured LLM provider
clients using BAML's OpenAI-compatible provider with custom base_url.
MiniMax offers a 204K context window and an OpenAI-compatible API.

Changes:
- Add MiniMaxM25 and MiniMaxM25Highspeed client definitions in
  clients.baml (template and workshop final)
- Include MiniMax in round-robin (CustomFast) and fallback
  (OpenaiFallback) strategies
- Add BAML test cases for MiniMax provider
- Add unit tests (14 tests) validating BAML configuration
- Add integration tests (5 tests) verifying MiniMax API compatibility
- Update README with MiniMax setup instructions and env var

CLAassistant commented Mar 17, 2026

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
1 out of 2 committers have signed the CLA.

✅ octo-patch
❌ PR Bot


PR Bot does not appear to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You have signed the CLA already but the status is still pending? Let us recheck it.

Add MiniMax-M2.7 and MiniMax-M2.7-highspeed as the latest recommended
models, replacing M2.5 as the default in round-robin and fallback
strategies. M2.5 clients remain available for backward compatibility.

- Add MiniMaxM27 and MiniMaxM27Highspeed BAML client definitions
- Update CustomFast round-robin and OpenaiFallback to use M2.7
- Add 9 unit tests and 5 integration tests for M2.7 models
- Update README with M2.7 client documentation
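Based on the strategy names and defaults described above, the updated round-robin and fallback blocks might be sketched as follows (assuming BAML's `round-robin` and `fallback` providers; the non-MiniMax member `GPT4oMini` is a hypothetical placeholder for whatever clients the template already defines):

```baml
// Round-robin strategy defaulting to the M2.7 highspeed variant,
// per this commit's description. Member list beyond the MiniMax
// client is illustrative only.
client<llm> CustomFast {
  provider round-robin
  options {
    strategy [MiniMaxM27Highspeed, GPT4oMini]
  }
}

// Fallback strategy trying M2.7 first, then an existing client.
client<llm> OpenaiFallback {
  provider fallback
  options {
    strategy [MiniMaxM27, GPT4oMini]
  }
}
```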
octo-patch changed the title from "feat: add MiniMax as LLM provider in BAML client configuration" to "feat: add MiniMax M2.5 & M2.7 as BAML LLM provider clients" on Mar 18, 2026