
Commit 64dc033

octo-patch (PR Bot) authored and committed

Add MiniMax as a supported LLM provider

MiniMax offers an OpenAI-compatible API with models like MiniMax-M2.5 and MiniMax-M2.5-highspeed (204K context window). This commit adds:

- MiniMax provider section in README (setup guide, cost estimation)
- Example config file (configs/minimax_config.yaml)
- Updated test env to support MiniMax API key validation

Co-Authored-By: octo-patch <octo-patch@users.noreply.github.com>
1 parent 65cbbe8 commit 64dc033

File tree

4 files changed (+108 / -3 lines)

README.md (30 additions, 2 deletions)

````diff
@@ -202,7 +202,7 @@ OpenEvolve implements a sophisticated **evolutionary coding pipeline** that goes
 <details>
 <summary><b>Advanced LLM Integration</b></summary>
 
-- **Universal API**: Works with OpenAI, Google, local models, and proxies
+- **Universal API**: Works with OpenAI, Google, MiniMax, local models, and proxies
 - **Intelligent Ensembles**: Weighted combinations with sophisticated fallback
 - **Test-Time Compute**: Enhanced reasoning through proxy systems (see [OptiLLM setup](#llm-provider-setup))
 - **Plugin Ecosystem**: Support for advanced reasoning plugins
@@ -281,6 +281,7 @@ docker run --rm -v $(pwd):/app ghcr.io/algorithmicsuperintelligence/openevolve:l
 - **o3-mini**: ~$0.03-0.12 per iteration (more cost-effective)
 - **Gemini-2.5-Pro**: ~$0.08-0.30 per iteration
 - **Gemini-2.5-Flash**: ~$0.01-0.05 per iteration (fastest and cheapest)
+- **MiniMax-M2.5**: ~$0.02-0.08 per iteration (204K context, OpenAI-compatible)
 - **Local models**: Nearly free after setup
 - **OptiLLM**: Use cheaper models with test-time compute for better results
 
@@ -320,6 +321,33 @@ export OPENAI_API_KEY="your-gemini-api-key"
 
 </details>
 
+<details>
+<summary><b>🧠 MiniMax</b></summary>
+
+[MiniMax](https://www.minimaxi.com/) offers powerful models with a 204K context window via an OpenAI-compatible API:
+
+```yaml
+# config.yaml
+llm:
+  api_base: "https://api.minimax.io/v1"
+  api_key: "${MINIMAX_API_KEY}"
+  models:
+    - name: "MiniMax-M2.5"
+      weight: 0.6
+    - name: "MiniMax-M2.5-highspeed"
+      weight: 0.4
+```
+
+```bash
+export MINIMAX_API_KEY="your-minimax-api-key"
+```
+
+> **Note:** MiniMax requires temperature to be in (0.0, 1.0] — zero is not accepted. The default 0.7 works well.
+
+See [`configs/minimax_config.yaml`](configs/minimax_config.yaml) for a complete configuration example.
+
+</details>
+
 <details>
 <summary><b>🏠 Local Models (Ollama/vLLM)</b></summary>
 
@@ -792,7 +820,7 @@ See the [Cost Estimation](#cost-estimation) section in Installation & Setup for
 
 **Yes!** OpenEvolve supports any OpenAI-compatible API:
 
-- **Commercial**: OpenAI, Google, Cohere
+- **Commercial**: OpenAI, Google, Cohere, MiniMax
 - **Local**: Ollama, vLLM, LM Studio, text-generation-webui
 - **Advanced**: OptiLLM for routing and test-time compute
````

configs/README.md (3 additions, 0 deletions)

```diff
@@ -12,6 +12,9 @@ The main configuration file containing all available options with sensible defau
 
 Use this file as a template for your own configurations.
 
+### `minimax_config.yaml`
+A complete configuration for using [MiniMax](https://www.minimaxi.com/) models (MiniMax-M2.5, MiniMax-M2.5-highspeed) with OpenEvolve. MiniMax provides an OpenAI-compatible API with 204K context window support.
+
 ### `island_config_example.yaml`
 A practical example configuration demonstrating proper island-based evolution setup. Shows:
 - Recommended island settings for most use cases
```

configs/minimax_config.yaml (74 additions, 0 deletions)

```diff
@@ -0,0 +1,74 @@
+# OpenEvolve Configuration for MiniMax
+# MiniMax provides an OpenAI-compatible API with powerful models like MiniMax-M2.5
+# Get your API key from: https://platform.minimaxi.com/
+#
+# Set your API key:
+#   export MINIMAX_API_KEY="your-minimax-api-key"
+
+# General settings
+max_iterations: 100
+checkpoint_interval: 10
+log_level: "INFO"
+random_seed: 42
+
+# LLM configuration for MiniMax
+llm:
+  api_base: "https://api.minimax.io/v1"
+  api_key: "${MINIMAX_API_KEY}"
+
+  # MiniMax models for evolution
+  models:
+    - name: "MiniMax-M2.5"
+      weight: 0.6
+    - name: "MiniMax-M2.5-highspeed"
+      weight: 0.4
+
+  # MiniMax models for LLM feedback
+  evaluator_models:
+    - name: "MiniMax-M2.5-highspeed"
+      weight: 1.0
+
+  # Generation parameters
+  # Note: MiniMax requires temperature to be in (0.0, 1.0] — zero is not accepted
+  temperature: 0.7
+  top_p: 0.95
+  max_tokens: 4096
+
+  # Request parameters
+  timeout: 120
+  retries: 3
+  retry_delay: 5
+
+# Evolution settings
+diff_based_evolution: true
+max_code_length: 10000
+
+# Prompt configuration
+prompt:
+  system_message: "You are an expert coder helping to improve programs through evolution."
+  evaluator_system_message: "You are an expert code reviewer."
+  num_top_programs: 3
+  num_diverse_programs: 2
+  use_template_stochasticity: true
+  include_artifacts: true
+
+# Database configuration
+database:
+  population_size: 1000
+  num_islands: 5
+  migration_interval: 50
+  migration_rate: 0.1
+  feature_dimensions:
+    - "complexity"
+    - "diversity"
+  feature_bins: 10
+
+# Evaluator configuration
+evaluator:
+  timeout: 300
+  cascade_evaluation: true
+  cascade_thresholds:
+    - 0.5
+    - 0.75
+    - 0.9
+  parallel_evaluations: 4
```
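The temperature comment in this config is the one MiniMax-specific constraint worth guarding in code. A minimal sketch of such a check, assuming only the (0.0, 1.0] range stated above (`check_minimax_temperature` is illustrative, not an OpenEvolve function):

```python
def check_minimax_temperature(temperature: float) -> float:
    """Reject values outside MiniMax's accepted range (0.0, 1.0].

    Unlike many OpenAI-compatible providers, 0.0 is not allowed.
    """
    if not 0.0 < temperature <= 1.0:
        raise ValueError(
            f"MiniMax temperature must be in (0.0, 1.0], got {temperature!r}"
        )
    return temperature

check_minimax_temperature(0.7)  # the config default: accepted
check_minimax_temperature(1.0)  # inclusive upper bound: accepted
```

Failing fast on `temperature: 0.0` is friendlier than letting the API reject the request mid-run.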

tests/test_valid_configs.py (1 addition, 1 deletion)

```diff
@@ -24,7 +24,7 @@ def collect_files(self):
                 config_files.append(os.path.join(root, file))
         return config_files
 
-    @patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-key-for-validation"})
+    @patch.dict(os.environ, {"ANTHROPIC_API_KEY": "test-key-for-validation", "MINIMAX_API_KEY": "test-key-for-validation"})
     def test_import_config_files(self):
        """Attempt to import all config files"""
        config_files = self.collect_files()
```
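The patched environment in this test exists so that `${MINIMAX_API_KEY}` placeholders in the config files resolve during validation without a real key. A sketch of that kind of substitution, assuming simple `${VAR}` syntax (`expand_env` is illustrative, not the project's actual loader):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} placeholders with values from os.environ.

    An unset variable raises KeyError so a missing API key fails
    fast instead of silently producing an empty credential.
    """
    return re.sub(r"\$\{(\w+)\}", lambda m: os.environ[m.group(1)], value)

os.environ["MINIMAX_API_KEY"] = "test-key-for-validation"
print(expand_env("${MINIMAX_API_KEY}"))  # prints the placeholder value
```

`unittest.mock.patch.dict` is a good fit for this pattern because it restores `os.environ` after the test, so the fake key never leaks into other tests.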
