Author's Description
May 28th update to the [original DeepSeek R1](/deepseek/deepseek-r1). Performance is on par with [OpenAI o1](/openai/o1), but the model is fully open-source, with fully open reasoning tokens. It is 671B parameters in size, with 37B active in an inference pass.
Performance Summary
DeepSeek: R1 0528, the May 28th update to the original DeepSeek R1, demonstrates strong performance across benchmarks, positioning it as a competitive open-source alternative to models like OpenAI o1. On speed, it tends toward longer response times, ranking in the 5th percentile across five benchmarks, and its pricing sits at premium levels, in the 16th percentile. Despite this, the model excels in complex tasks, achieving 93.0% accuracy in Coding (91st percentile) and 98.0% in Reasoning (94th percentile), indicating a strong aptitude for logical and programmatic challenges. Its General Knowledge is also robust at 99.5% accuracy (80th percentile), while Ethics (99.0%, 55th percentile) and Email Classification (98.0%, 59th percentile) sit closer to the median. Overall, its key strengths are high accuracy on demanding cognitive tasks, particularly coding, reasoning, and general knowledge; its main weaknesses are slower response times and premium pricing. This model is a compelling option for applications that prioritize accuracy and advanced problem-solving, especially where the benefits of an open-source model with transparent reasoning outweigh speed and cost considerations.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.272 |
| Completion | $0.272 |
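At the listed flat rate of $0.272 per million tokens for both prompt and completion, the cost of a single request can be estimated directly. A minimal sketch (the token counts in the example are illustrative, not measured):

```python
# Estimate request cost at the listed flat rate of $0.272 per 1M tokens,
# which applies to both prompt (input) and completion (output) tokens.
PRICE_PER_MILLION = 0.272  # USD per 1M tokens, from the pricing table

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated USD cost of one request."""
    total_tokens = prompt_tokens + completion_tokens
    return total_tokens / 1_000_000 * PRICE_PER_MILLION

# Example: a 2,000-token prompt with a 6,000-token reasoning-heavy reply
print(f"${estimate_cost(2_000, 6_000):.6f}")
```

Because input and output are priced identically, only the total token count matters; note that a reasoning model's completions can run long, so output tokens often dominate the bill.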
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| InferenceNet | deepseek/deepseek-r1-0528 | 128K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| DeepInfra | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Lambda | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Novita | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Parasail | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| GMICloud | deepseek/deepseek-r1-0528 | 131K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Nebius | deepseek/deepseek-r1-0528 | 131K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Enfer | deepseek/deepseek-r1-0528 | 32K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| BaseTen | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Kluster | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Together | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Fireworks | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| SambaNova | deepseek/deepseek-r1-0528 | 32K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| DeepSeek | deepseek/deepseek-r1-0528 | 64K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Cent-ML | deepseek/deepseek-r1-0528 | 131K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Crusoe | deepseek/deepseek-r1-0528 | 131K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Targon | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Chutes | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Google | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
| Friendli | deepseek/deepseek-r1-0528 | 163K | $0.272 / 1M tokens | $0.272 / 1M tokens |
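Since every endpoint charges the same rate, the practical differentiator in the table above is context length. A minimal sketch for picking providers that fit a given prompt size, with the context lengths transcribed from the table (provider names and "K" values as listed, nothing else assumed):

```python
# Listed context lengths (in thousands of tokens) per provider,
# transcribed from the endpoints table above.
ENDPOINT_CONTEXT_K = {
    "InferenceNet": 128, "DeepInfra": 163, "Lambda": 163, "Novita": 163,
    "Parasail": 163, "GMICloud": 131, "Nebius": 131, "Enfer": 32,
    "BaseTen": 163, "Kluster": 163, "Together": 163, "Fireworks": 163,
    "SambaNova": 32, "DeepSeek": 64, "Cent-ML": 131, "Crusoe": 131,
    "Targon": 163, "Chutes": 163, "Google": 163, "Friendli": 163,
}

def providers_with_context(min_k: int) -> list[str]:
    """Providers whose listed context is at least min_k thousand tokens."""
    return sorted(p for p, k in ENDPOINT_CONTEXT_K.items() if k >= min_k)

# Example: endpoints that can take a ~100K-token conversation
print(providers_with_context(100))
```

Note that the 32K and 64K endpoints can be a poor fit for long reasoning traces, since the model's thinking tokens count against the same context window.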
Benchmark Results
Other Models by deepseek
| Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| DeepSeek: R1 Distill Qwen 7B | May 30, 2025 | 7B | 131K | Text input, Text output | ★ | ★ | $$$$ |
| DeepSeek: Deepseek R1 0528 Qwen3 8B | May 29, 2025 | 8B | 131K | Text input, Text output | ★ | ★★★★★ | $$$ |
| DeepSeek: DeepSeek Prover V2 | Apr 30, 2025 | ~671B | 131K | Text input, Text output | ★★★★ | ★★★★★ | $$$$ |
| DeepSeek: DeepSeek V3 0324 | Mar 24, 2025 | ~685B | 163K | Text input, Text output | ★★★ | ★★★★★ | $$$ |
| DeepSeek: R1 Distill Llama 8B | Feb 07, 2025 | 8B | 32K | Text input, Text output | ★ | ★★★ | $$ |
| DeepSeek: R1 Distill Qwen 1.5B | Jan 31, 2025 | 1.5B | 131K | Text input, Text output | ★★★ | ★ | $$$ |
| DeepSeek: R1 Distill Qwen 32B | Jan 29, 2025 | 32B | 131K | Text input, Text output | ★ | ★★★★★ | $$$ |
| DeepSeek: R1 Distill Qwen 14B | Jan 29, 2025 | 14B | 64K | Text input, Text output | ★ | ★★★ | $$$ |
| DeepSeek: R1 Distill Llama 70B | Jan 23, 2025 | 70B | 131K | Text input, Text output | ★ | ★★★★★ | $$$$ |
| DeepSeek: R1 | Jan 20, 2025 | ~671B | 128K | Text input, Text output | ★★ | ★★★★ | $$$$ |
| DeepSeek: DeepSeek V3 | Dec 26, 2024 | — | 163K | Text input, Text output | ★★★ | ★★★★ | $$$ |