MiniMax: MiniMax M1 (extended)

Input: text · Output: text
Description

MiniMax-M1 is a large-scale, open-weight reasoning model designed for extended context and high-efficiency inference. It pairs a hybrid Mixture-of-Experts (MoE) architecture with a custom "lightning attention" mechanism, allowing the base model to process sequences of up to 1 million tokens while maintaining competitive FLOP efficiency (the hosted endpoints for this variant expose 128K-512K of that window; see Available Endpoints below). With 456 billion total parameters and 45.9 billion active per token, this variant is optimized for complex, multi-step reasoning tasks. Trained via a custom reinforcement learning pipeline (CISPO), M1 excels at long-context understanding, software engineering, agentic tool use, and mathematical reasoning. Benchmarks show strong performance across FullStackBench, SWE-bench, MATH, GPQA, and TAU-Bench, often outperforming other open models such as DeepSeek R1 and Qwen3-235B.
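
As a usage illustration, the sketch below sends a single chat completion to this model through an OpenAI-compatible gateway. The base URL, the API-key placeholder, and the OpenRouter-style routing are assumptions, not confirmed by this page; the model id matches the endpoint listings further down.

```python
# Minimal sketch: one chat-completion request to minimax/minimax-m1:extended.
# Assumes an OpenAI-compatible gateway (OpenRouter-style); the base_url and
# api_key values are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed gateway URL
    api_key="YOUR_API_KEY",                   # placeholder
)

response = client.chat.completions.create(
    model="minimax/minimax-m1:extended",
    messages=[
        {"role": "user", "content": "Explain lightning attention in two sentences."}
    ],
)
print(response.choices[0].message.content)
```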

Key Specifications

Context Length: 128K
Parameters: 456B total (45.9B active per token)
Created: Jun 17, 2025

Supported Parameters

This model supports the following parameters:

Reasoning, Min P, Structured Outputs, Temperature, Tools, Tool Choice, Include Reasoning, Presence Penalty, Max Tokens, Top P, Seed, Stop, Logit Bias, Frequency Penalty
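
To make the list concrete, here is a hedged sketch of a request exercising several of these parameters. Standard OpenAI-schema fields pass through directly; Min P is not part of the core schema, so it is shown via extra_body, and whether the gateway accepts it under that name is an assumption.

```python
# Sketch: exercising the supported sampling and penalty parameters.
# Assumes the same OpenAI-compatible gateway as the earlier sketch.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="minimax/minimax-m1:extended",
    messages=[{"role": "user", "content": "List three long-context use cases."}],
    temperature=0.7,        # Temperature
    top_p=0.95,             # Top P
    max_tokens=1024,        # Max Tokens
    seed=42,                # Seed (best-effort reproducibility)
    stop=["###"],           # Stop
    presence_penalty=0.1,   # Presence Penalty
    frequency_penalty=0.1,  # Frequency Penalty
    logit_bias={},          # Logit Bias (token-id -> bias map)
    extra_body={"min_p": 0.05},  # Min P: gateway-specific extension (assumption)
)
print(response.choices[0].message.content)
```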
Features

This model supports the following features:

Structured Outputs, Reasoning, Tools
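
These features combine naturally in agentic use. The sketch below registers a single hypothetical tool and lets the model decide whether to call it; the tool name and schema are illustrative only, and the include_reasoning flag is an OpenRouter-style extension assumed here.

```python
# Sketch: tool calling with this model. The tool name and schema are
# hypothetical; "include_reasoning" is a gateway extension (assumption).
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",  # hypothetical tool, for illustration only
        "description": "Run a project's test suite and return the results.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="minimax/minimax-m1:extended",
    messages=[{"role": "user", "content": "Run the tests under ./src and summarize any failures."}],
    tools=tools,
    tool_choice="auto",
    extra_body={"include_reasoning": True},  # assumption: gateway-specific flag
)
print(response.choices[0].message.tool_calls)
```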
Model Pricing

Current Pricing

Feature      Price (per 1M tokens)
Prompt       $0
Completion   $0

Available Endpoints
Provider   Endpoint Name                  Context Length   Pricing (Input)   Pricing (Output)
Novita     minimax/minimax-m1:extended    128K             $0 / 1M tokens    $0 / 1M tokens
Chutes     minimax/minimax-m1:extended    512K             $0 / 1M tokens    $0 / 1M tokens
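
Since the two providers expose different context lengths, a request can be steered toward one of them. The sketch below uses an OpenRouter-style provider-routing object; its field names, and whether this gateway accepts them, are assumptions.

```python
# Sketch: preferring the 512K Chutes endpoint over Novita's 128K one.
# The "provider" routing object is an OpenRouter-style extension; the
# field names here are assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="minimax/minimax-m1:extended",
    messages=[{"role": "user", "content": "Hello!"}],
    extra_body={"provider": {"order": ["Chutes"], "allow_fallbacks": False}},
)
print(response.choices[0].message.content)
```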