Author's Description
Qwen3-235B-A22B-Thinking-2507 is a high-performance, open-weight Mixture-of-Experts (MoE) language model optimized for complex reasoning tasks. It activates 22B of its 235B parameters per forward pass and natively supports a context of up to 262,144 tokens. This "thinking-only" variant is tuned for structured logical reasoning, mathematics, science, and long-form generation, with strong benchmark results on AIME, SuperGPQA, LiveCodeBench, and MMLU-Redux. Its chat template enforces a dedicated reasoning mode: each response opens with a reasoning trace that is closed by a </think> tag, and the model is designed for long outputs (up to 81,920 tokens) in challenging domains. The model is instruction-tuned and excels at step-by-step reasoning, tool use, agentic workflows, and multilingual tasks. This release is the most capable open-source variant in the Qwen3-235B series, surpassing many closed models on structured reasoning use cases.
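Since the template always opens a reasoning block, a typical integration step is splitting the trace from the final answer at the closing </think> tag. Below is a minimal sketch against an OpenAI-compatible gateway; the OpenRouter base URL, the environment-variable key handling, and the chosen token budget are illustrative assumptions, not an official recipe.

```python
# Minimal sketch: call the model through an OpenAI-compatible API and
# separate the reasoning trace from the final answer. Assumes OpenRouter's
# endpoint and the model slug from this page; adjust for your provider.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # assumed gateway
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed env var
)

response = client.chat.completions.create(
    model="qwen/qwen3-235b-a22b-thinking-2507",
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    max_tokens=32_768,  # the model supports outputs up to 81,920 tokens
)

text = response.choices[0].message.content or ""
# The template opens the reasoning block automatically, so the output
# typically contains only the closing </think> tag before the answer.
reasoning, sep, answer = text.partition("</think>")
print(answer.strip() if sep else text.strip())
```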
Performance Summary
Qwen3-235B-A22B-Thinking-2507 shows a strong overall profile, excelling in specialized reasoning and knowledge-heavy tasks. Its speed ranking places it among the slower models (14th percentile) and its pricing sits at the premium end (6th percentile), but these costs are often justified by its capabilities. Reliability is outstanding: at the 100th percentile, it consistently returns usable responses with minimal technical failures. Across benchmarks, the model posts remarkable accuracy in Coding (98.0%, 100th percentile) and General Knowledge (100.0%, perfect accuracy), often leading its price and speed categories in these domains. Reasoning is also strong at 90.0% accuracy (87th percentile). Instruction Following, however, is a notable weakness at 26.3% accuracy (25th percentile), suggesting difficulty with complex multi-step directives despite the "thinking-only" design; the model's high cost and long duration on that benchmark underline the gap. Ethics accuracy is high in absolute terms (98.0%) but mid-pack relative to peers (40th percentile), and Email Classification is solid at 99.0% (72nd percentile). The model's core strength is structured logical reasoning and knowledge retrieval, making it well suited to demanding analytical applications.
Model Pricing
Current Pricing
Feature | Price (per 1M tokens) |
---|---|
Prompt | $0.078 |
Completion | $0.312 |
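At these rates, per-request cost is simple arithmetic, and completion-heavy reasoning traces dominate the bill. The sketch below uses hypothetical token counts, not measured ones, to show the math.

```python
# Back-of-the-envelope cost at the listed rates: $0.078 per 1M prompt
# tokens and $0.312 per 1M completion tokens.
PROMPT_RATE = 0.078 / 1_000_000      # USD per prompt token
COMPLETION_RATE = 0.312 / 1_000_000  # USD per completion token

prompt_tokens, completion_tokens = 8_000, 40_000  # hypothetical long trace
cost = prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE
print(f"${cost:.4f}")  # -> $0.0131 for this request
```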
Available Endpoints
Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
---|---|---|---|---|
Alibaba | qwen/qwen3-235b-a22b-thinking-2507 | 131K | $0.70 / 1M tokens | $8.40 / 1M tokens |
Novita | qwen/qwen3-235b-a22b-thinking-2507 | 131K | $0.078 / 1M tokens | $0.312 / 1M tokens |
Chutes | qwen/qwen3-235b-a22b-thinking-2507 | 262K | $0.078 / 1M tokens | $0.312 / 1M tokens |
Novita | qwen/qwen3-235b-a22b-thinking-2507 | 131K | $0.30 / 1M tokens | $3.00 / 1M tokens |
DeepInfra | qwen/qwen3-235b-a22b-thinking-2507 | 262K | $0.13 / 1M tokens | $0.60 / 1M tokens |
Parasail | qwen/qwen3-235b-a22b-thinking-2507 | 262K | $0.65 / 1M tokens | $3.00 / 1M tokens |
Together | qwen/qwen3-235b-a22b-thinking-2507 | 262K | $0.65 / 1M tokens | $3.00 / 1M tokens |
Crusoe | qwen/qwen3-235b-a22b-thinking-2507 | 262K | $0.078 / 1M tokens | $0.312 / 1M tokens |
Cerebras | qwen/qwen3-235b-a22b-thinking-2507 | 131K | $0.60 / 1M tokens | $1.20 / 1M tokens |
GMICloud | qwen/qwen3-235b-a22b-thinking-2507 | 131K | $0.60 / 1M tokens | $3.00 / 1M tokens |
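Because one model slug is served by several providers at different context lengths and prices, callers may want to pin requests to particular endpoints. The sketch below uses OpenRouter's provider-routing request field ("provider" with "order" and "allow_fallbacks"); the exact field names and the chosen providers are assumptions to verify against current OpenRouter documentation.

```python
# Hedged sketch: route a request to the 262K-context providers listed
# above via OpenRouter's provider preferences (field names assumed).
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "qwen/qwen3-235b-a22b-thinking-2507",
        "messages": [{"role": "user", "content": "Summarize MoE routing."}],
        # Prefer the cheap 262K endpoints; fail rather than fall back.
        "provider": {"order": ["Chutes", "Crusoe"], "allow_fallbacks": False},
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```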
Other Models by qwen
Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
---|---|---|---|---|---|---|---|
Qwen: Qwen3 30B A3B Instruct 2507 | Jul 29, 2025 | 30B | 131K | Text input, text output | ★★★★ | ★★★★ | $$$ |
Qwen: Qwen3 Coder | Jul 22, 2025 | 480B | 1M | Text input, text output | ★★★★ | ★★★ | $$$ |
Qwen: Qwen3 235B A22B Instruct 2507 | Jul 21, 2025 | 235B | 262K | Text input, text output | ★ | ★★★ | $$$ |
Qwen: Qwen3 30B A3B | Apr 28, 2025 | 30B | 40K | Text input, text output | ★ | ★★★★★ | $$$$ |
Qwen: Qwen3 8B | Apr 28, 2025 | 8B | 128K | Text input, text output | ★ | ★★★ | $$$ |
Qwen: Qwen3 14B | Apr 28, 2025 | 14B | 40K | Text input, text output | ★★ | ★★★★★ | $$$ |
Qwen: Qwen3 32B | Apr 28, 2025 | 32B | 40K | Text input, text output | ★ | ★★★★★ | $$$ |
Qwen: Qwen3 235B A22B | Apr 28, 2025 | 235B | 40K | Text input, text output | ★ | ★★★★ | $$$$ |
Qwen: Qwen2.5 VL 32B Instruct | Mar 24, 2025 | 32B | 128K | Text + image input, text output | ★★ | ★★★ | $$$ |
Qwen: QwQ 32B | Mar 05, 2025 | 32B | 131K | Text input, text output | ★ | ★★★ | $$$ |
Qwen: Qwen VL Plus | Feb 04, 2025 | — | 7K | Text + image input, text output | ★★★★ | ★★ | $$$ |
Qwen: Qwen VL Max | Feb 01, 2025 | — | 7K | Text + image input, text output | ★★★★★ | ★★★ | $$$$ |
Qwen: Qwen-Turbo | Feb 01, 2025 | — | 1M | Text input, text output | ★★★★★ | ★★★★ | $$ |
Qwen: Qwen2.5 VL 72B Instruct | Feb 01, 2025 | 72B | 32K | Text + image input, text output | ★★★★ | ★★★★ | $$ |
Qwen: Qwen-Plus | Feb 01, 2025 | — | 131K | Text input, text output | ★★★ | ★★★★ | $$$ |
Qwen: Qwen-Max | Feb 01, 2025 | — | 32K | Text input, text output | ★★★ | ★★★★ | $$$$ |
Qwen: QwQ 32B Preview | Nov 27, 2024 | 32B | 32K | Text input, text output | — | ★ | $$ |
Qwen2.5 Coder 32B Instruct | Nov 11, 2024 | 32B | 32K | Text input, text output | ★★★★★ | ★★★★★ | $ |
Qwen2.5 7B Instruct | Oct 15, 2024 | 7B | 32K | Text input, text output | ★ | ★★★ | $$ |
Qwen2.5 72B Instruct | Sep 18, 2024 | 72B | 32K | Text input, text output | ★★★ | ★★★ | $$$ |
Qwen: Qwen2.5-VL 7B Instruct | Aug 27, 2024 | 7B | 32K | Text + image input, text output | ★★★ | ★★★ | $$$ |
Qwen 2 72B Instruct | Jun 06, 2024 | 72B | 32K | Text input, text output | ★★★★ | ★★ | $$$$ |