Author's Description
MiniMax-M2.1 is a lightweight, state-of-the-art large language model optimized for coding, agentic workflows, and modern application development. With only 10 billion activated parameters, it delivers a major jump in real-world capability while maintaining exceptional latency, scalability, and cost efficiency. Compared to its predecessor, M2.1 delivers cleaner, more concise outputs and faster perceived response times. It shows leading multilingual coding performance across major systems and application languages, achieving 49.4% on Multi-SWE-Bench and 72.5% on SWE-Bench Multilingual, and serves as a versatile agent “brain” for IDEs, coding tools, and general-purpose assistance. To avoid degrading this model's performance, MiniMax highly recommends preserving reasoning between turns. Learn more about using reasoning_details to pass back reasoning in our [docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#preserving-reasoning-blocks).
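Below is a minimal sketch of what preserving reasoning between turns can look like when calling this model through OpenRouter's OpenAI-compatible chat completions endpoint. It assumes the `requests` library, an `OPENROUTER_API_KEY` environment variable, and the `reasoning_details` field described in the linked docs; the exact request/response field shapes should be verified against those docs rather than taken from this example.

```python
import os
import requests

# Sketch only: assumes OpenRouter's chat-completions endpoint and the
# reasoning_details behavior described in the linked docs.
API_URL = "https://openrouter.ai/api/v1/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"}

messages = [{"role": "user", "content": "Refactor this function to be iterative."}]

# First turn: the assistant message may carry a reasoning_details block.
resp = requests.post(API_URL, headers=HEADERS, json={
    "model": "minimax/minimax-m2.1",
    "messages": messages,
}).json()
assistant_msg = resp["choices"][0]["message"]

# Pass the assistant message back unmodified (reasoning_details included, if
# present) so the model keeps its reasoning context on the next turn.
messages.append(assistant_msg)
messages.append({"role": "user", "content": "Now add type hints."})

resp2 = requests.post(API_URL, headers=HEADERS, json={
    "model": "minimax/minimax-m2.1",
    "messages": messages,
})
print(resp2.json()["choices"][0]["message"]["content"])
```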
Key Specifications
Supported Parameters
This model supports the following parameters:
Features
This model supports the following features:
Performance Summary
MiniMax-M2.1, a lightweight large language model, shows a strong performance profile, particularly in specialized areas. While its speed and pricing are moderate, ranking in the 22nd and 24th percentiles respectively, it is exceptionally reliable, with a 99% success rate indicating consistent, dependable operation. The model excels at coding, scoring 95.0% accuracy on the Coding (Baseline) benchmark (94th percentile), which is consistent with its leading multilingual coding results (49.4% on Multi-SWE-Bench and 72.5% on SWE-Bench Multilingual). It also performs strongly on General Knowledge (99.5% accuracy, 75th percentile), Email Classification (99.0% accuracy, 81st percentile), and Reasoning (94.0% accuracy, 84th percentile); handling complex reasoning tasks and classifying information accurately are notable strengths.

There are, however, areas for improvement. On the hallucination benchmark it scores 90.0% accuracy (40th percentile), suggesting occasional difficulty acknowledging uncertainty, and Instruction Following is middling at 54.0% accuracy (54th percentile). The model is also notably slow on the Mathematics benchmark, ranking in the 10th percentile for duration. Finally, the recommendation to preserve reasoning between turns is worth keeping in mind when optimizing its performance.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.3 |
| Completion | $1.2 |
| Input Cache Read | $0.03 |
| Input Cache Write | $0.375 |
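Since the table above prices tokens per million, a quick back-of-the-envelope sketch can estimate per-request cost. The function name and token counts below are illustrative, not measurements; only the rates come from the table.

```python
# Rates from the pricing table above ($ per 1M tokens).
PROMPT_PRICE = 0.30
COMPLETION_PRICE = 1.20
CACHE_READ_PRICE = 0.03

def request_cost(prompt_tokens: int, completion_tokens: int, cached_tokens: int = 0) -> float:
    """Estimate the dollar cost of one request at the listed rates."""
    uncached = max(prompt_tokens - cached_tokens, 0)
    return (
        uncached * PROMPT_PRICE
        + cached_tokens * CACHE_READ_PRICE
        + completion_tokens * COMPLETION_PRICE
    ) / 1_000_000

# Hypothetical example: a 20k-token prompt (15k of it served from cache)
# with a 2k-token completion.
print(f"${request_cost(20_000, 2_000, cached_tokens=15_000):.4f}")
```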
Price History
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Minimax | minimax/minimax-m2.1 | 204K | $0.3 / 1M tokens | $1.2 / 1M tokens |
| Minimax | minimax/minimax-m2.1 | 204K | $0.3 / 1M tokens | $2.4 / 1M tokens |
Benchmark Results
| Benchmark | Category | Reasoning | Strategy | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|---|
Other Models by minimax
| Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| MiniMax: MiniMax M2 | Oct 23, 2025 | ~230B | 196K | Text input, Text output | ★ | ★★★ | $$$$$ |
| MiniMax: MiniMax M1 | Jun 17, 2025 | — | 1M | Text input, Text output | ★ | ★★★★ | $$$$$ |
| MiniMax: MiniMax M1 (extended) (Unavailable) | Jun 17, 2025 | — | 128K | Text input, Text output | ★ | ★ | $$$$ |
| MiniMax: MiniMax-01 | Jan 14, 2025 | ~456B | 1M | Text input, Image input, Text output | ★★★ | ★★ | $$$ |