Author's Description
DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active) that supports both thinking and non-thinking modes via prompt templates. It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 128K tokens, and uses FP8 microscaling for efficient inference. Users can control reasoning behavior with the `reasoning.enabled` boolean. [Learn more in our docs](https://openrouter.ai/docs/use-cases/reasoning-tokens#enable-reasoning-with-default-config). The model improves tool use, code generation, and reasoning efficiency, achieving performance comparable to DeepSeek-R1 on difficult benchmarks while responding more quickly. It supports structured tool calling, code agents, and search agents, making it suitable for research, coding, and agentic workflows. It succeeds [DeepSeek V3-0324](/deepseek/deepseek-chat-v3-0324) and performs well across a variety of tasks.
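The reasoning toggle above maps onto the request body sent to OpenRouter's chat-completions endpoint. The sketch below builds such a payload without sending it (an API key and HTTP call would be needed for a real request); the exact field shape is documented in the docs linked above, and the prompt text here is purely illustrative.

```python
import json

# Build a request body for https://openrouter.ai/api/v1/chat/completions.
# The reasoning.enabled flag switches DeepSeek-V3.1 between its
# thinking and non-thinking modes.
def build_request(prompt: str, thinking: bool) -> dict:
    return {
        "model": "deepseek/deepseek-chat-v3.1",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"enabled": thinking},
    }

payload = build_request("Summarize the FP8 microscaling format.", thinking=True)
print(json.dumps(payload, indent=2))
```

Setting `thinking=False` produces the same payload with `"enabled": false`, which requests the faster non-thinking mode.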
Performance Summary
DeepSeek-V3.1, a 671B-parameter hybrid reasoning model, demonstrates a balanced performance profile with notable strengths in reliability and accuracy across various tasks. While its speed is moderate, ranking in the 27th percentile, it offers competitive pricing, placing in the 58th percentile. A standout feature is its exceptional reliability: a 100% success rate across all benchmarks, indicating consistent and dependable operation.

In terms of accuracy, DeepSeek-V3.1 excels in several categories. It achieved perfect accuracy on the Ethics (Baseline) benchmark, making it the most accurate model at its price point and among models of comparable speed. It also showed strong performance in General Knowledge (99.5% accuracy, 83rd percentile) and Email Classification (98.0% accuracy, 66th percentile). Its instruction-following capabilities are robust (70.0% accuracy, 82nd percentile), and it performs well in Coding (89.0% accuracy, 75th percentile) and Reasoning (78.0% accuracy, 75th percentile).

The model's design, incorporating a two-phase long-context training process and FP8 microscaling, contributes to its efficiency and broad applicability in research, coding, and agentic workflows.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.27 |
| Completion | $1.10 |
| Input Cache Read | $0.07 |
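As a quick worked example of the table above, the sketch below estimates per-request cost, assuming cached prompt tokens bill at the Input Cache Read rate while the rest bill at the Prompt rate:

```python
# Prices from the table above, in USD per 1M tokens.
PROMPT_PRICE = 0.27
COMPLETION_PRICE = 1.10
CACHE_READ_PRICE = 0.07

def request_cost(prompt_tokens: int, completion_tokens: int,
                 cached_tokens: int = 0) -> float:
    """Estimated USD cost of one request; cached prompt tokens
    are billed at the cache-read rate instead of the prompt rate."""
    fresh = prompt_tokens - cached_tokens
    return (fresh * PROMPT_PRICE
            + cached_tokens * CACHE_READ_PRICE
            + completion_tokens * COMPLETION_PRICE) / 1_000_000

# A 10K-token prompt (8K of it cache-hit) producing a 2K-token answer:
cost = request_cost(10_000, 2_000, cached_tokens=8_000)
print(f"${cost:.4f}")  # → $0.0033
```

Cache reads dominate the savings here: the same request with no cache hits would cost $0.0049.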
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| DeepSeek | deepseek/deepseek-chat-v3.1 | 131K | $0.27 / 1M tokens | $1.10 / 1M tokens |
| Chutes | deepseek/deepseek-chat-v3.1 | 163K | $0.20 / 1M tokens | $0.80 / 1M tokens |
| AtlasCloud | deepseek/deepseek-chat-v3.1 | 65K | $0.45 / 1M tokens | $1.50 / 1M tokens |
| Novita | deepseek/deepseek-chat-v3.1 | 163K | $0.27 / 1M tokens | $1.00 / 1M tokens |
| GMICloud | deepseek/deepseek-chat-v3.1 | 163K | $0.45 / 1M tokens | $1.50 / 1M tokens |
| Fireworks | deepseek/deepseek-chat-v3.1 | 163K | $0.56 / 1M tokens | $1.68 / 1M tokens |
| Parasail | deepseek/deepseek-chat-v3.1 | 163K | $0.20 / 1M tokens | $0.80 / 1M tokens |
| Parasail | deepseek/deepseek-chat-v3.1 | 163K | $0.64 / 1M tokens | $1.65 / 1M tokens |
| DeepInfra | deepseek/deepseek-chat-v3.1 | 163K | $0.20 / 1M tokens | $0.80 / 1M tokens |
| DeepInfra | deepseek/deepseek-chat-v3.1 | 163K | $0.27 / 1M tokens | $1.00 / 1M tokens |
| SambaNova | deepseek/deepseek-chat-v3.1 | 32K | $3.00 / 1M tokens | $4.50 / 1M tokens |
| SiliconFlow | deepseek/deepseek-chat-v3.1 | 163K | $0.27 / 1M tokens | $1.10 / 1M tokens |
| WandB | deepseek/deepseek-chat-v3.1 | 128K | $0.55 / 1M tokens | $1.65 / 1M tokens |
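Because input and output rates differ per endpoint, the cheapest provider depends on your token mix. The sketch below compares endpoints from the table above by a blended per-1M-token price, assuming a 3:1 input-to-output ratio (a workload-dependent assumption, not a recommendation):

```python
# Endpoint prices (USD per 1M tokens) transcribed from the table above.
ENDPOINTS = [
    ("DeepSeek", 0.27, 1.10),
    ("Chutes", 0.20, 0.80),
    ("AtlasCloud", 0.45, 1.50),
    ("Novita", 0.27, 1.00),
    ("GMICloud", 0.45, 1.50),
    ("Fireworks", 0.56, 1.68),
    ("Parasail", 0.20, 0.80),
    ("Parasail", 0.64, 1.65),
    ("DeepInfra", 0.20, 0.80),
    ("DeepInfra", 0.27, 1.00),
    ("SambaNova", 3.00, 4.50),
    ("SiliconFlow", 0.27, 1.10),
    ("WandB", 0.55, 1.65),
]

def blended_price(in_price: float, out_price: float,
                  input_share: float = 0.75) -> float:
    """Effective per-1M-token price for a workload that is mostly input."""
    return in_price * input_share + out_price * (1 - input_share)

cheapest = min(ENDPOINTS, key=lambda e: blended_price(e[1], e[2]))
print(cheapest[0], round(blended_price(cheapest[1], cheapest[2]), 3))
```

Note that price is only one axis; context length (65K to 163K in the table) may matter more for long-context workloads.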
Other Models by DeepSeek
| Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| DeepSeek: DeepSeek V3.1 Base | Aug 20, 2025 | ~671B | 163K | Text in/out | ★★ | ★ | $$ |
| DeepSeek: R1 Distill Qwen 7B (Unavailable) | May 30, 2025 | 7B | 131K | Text in/out | ★ | ★ | $$$$ |
| DeepSeek: R1 0528 Qwen3 8B | May 29, 2025 | 8B | 131K | Text in/out | ★★★ | ★★★ | $$ |
| DeepSeek: R1 0528 | May 28, 2025 | ~671B | 128K | Text in/out | ★★★ | ★★★ | $$$ |
| DeepSeek: DeepSeek Prover V2 | Apr 30, 2025 | ~671B | 131K | Text in/out | ★★ | ★★★★★ | $$$$ |
| DeepSeek: DeepSeek V3 Base (Unavailable) | Mar 29, 2025 | ~671B | 163K | Text in/out | ★ | ★ | $$$ |
| DeepSeek: DeepSeek V3 0324 | Mar 24, 2025 | ~685B | 163K | Text in/out | ★★★★ | ★★★★★ | $$ |
| DeepSeek: R1 Distill Llama 8B | Feb 07, 2025 | 8B | 32K | Text in/out | ★ | ★★ | $$ |
| DeepSeek: R1 Distill Qwen 1.5B (Unavailable) | Jan 31, 2025 | 1.5B | 131K | Text in/out | ★★★ | ★ | $$$ |
| DeepSeek: R1 Distill Qwen 32B | Jan 29, 2025 | 32B | 131K | Text in/out | ★ | ★★★★★ | $$$ |
| DeepSeek: R1 Distill Qwen 14B | Jan 29, 2025 | 14B | 64K | Text in/out | ★ | ★★ | $$$ |
| DeepSeek: R1 Distill Llama 70B | Jan 23, 2025 | 70B | 131K | Text in/out | ★★★ | ★★★★★ | $$ |
| DeepSeek: R1 | Jan 20, 2025 | ~671B | 128K | Text in/out | ★★★ | ★★★★★ | $$$ |
| DeepSeek: DeepSeek V3 | Dec 26, 2024 | — | 163K | Text in/out | ★★★ | ★★★★★ | $$$ |