DeepSeek: R1

Text input · Text output · Free option available
Author's Description

DeepSeek R1 is here: Performance on par with [OpenAI o1](/openai/o1), but open-sourced and with fully open reasoning tokens. It's 671B parameters in size, with 37B active in an inference pass. Fully open-source model & [technical report](https://api-docs.deepseek.com/news/news250120). MIT licensed: Distill & commercialize freely!

Key Specifications

| Spec | Value |
|---|---|
| Cost | $$$ |
| Context | 128K |
| Parameters | 671B (rumoured) |
| Released | Jan 20, 2025 |
Supported Parameters

This model supports the following parameters:

Top Logprobs, Reasoning, Include Reasoning, Logit Bias, Stop, Top P, Seed, Min P, Frequency Penalty, Response Format, Structured Outputs, Max Tokens, Presence Penalty, Temperature
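As a sketch of how these parameters are typically used, the snippet below builds a request payload setting several of them. The payload shape and parameter names assume an OpenAI-compatible chat-completions API, which this page does not confirm; check your provider's documentation.

```python
# Sketch: a request payload for deepseek/deepseek-r1 using a few of the
# supported sampling parameters. Field names assume an OpenAI-compatible
# chat-completions API (an assumption, not confirmed by this page).
import json

payload = {
    "model": "deepseek/deepseek-r1",
    "messages": [
        {"role": "user", "content": "Explain mixture-of-experts in one paragraph."}
    ],
    "temperature": 0.6,        # sampling temperature
    "top_p": 0.95,             # nucleus sampling cutoff
    "max_tokens": 1024,        # cap on completion length
    "frequency_penalty": 0.1,  # discourage verbatim repetition
    "seed": 42,                # best-effort reproducibility
    "stop": ["</answer>"],     # optional stop sequence
}
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's chat-completions endpoint with your API key.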
Features

This model supports the following features:

Response Format, Reasoning, Structured Outputs
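To illustrate the Structured Outputs and Reasoning features together, here is a minimal sketch of a request that constrains the reply to a JSON schema. It assumes the OpenAI-style `json_schema` convention for `response_format`, and the `include_reasoning` flag mirrors the "Include Reasoning" parameter listed above; both are assumptions to verify against your provider's docs.

```python
# Sketch: requesting structured output via response_format, assuming the
# OpenAI-style "json_schema" convention (an assumption; check provider docs).
schema = {
    "type": "json_schema",
    "json_schema": {
        "name": "email_label",
        "schema": {
            "type": "object",
            "properties": {
                "label": {"type": "string"},
                "confidence": {"type": "number"},
            },
            "required": ["label", "confidence"],
        },
    },
}

request = {
    "model": "deepseek/deepseek-r1",
    "messages": [
        {"role": "user", "content": "Classify this email: 'Your invoice is attached.'"}
    ],
    "response_format": schema,   # constrain the reply to the schema above
    "include_reasoning": True,   # surface reasoning tokens (assumed flag name)
}
```

With this shape, the model's answer is expected to be a JSON object with `label` and `confidence` fields, which can be parsed directly instead of scraped from free text.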
Performance Summary

DeepSeek R1, a 671B-parameter open-source model with 37B parameters active per inference pass, performs competitively across benchmarks. Its response times rank in the 40th percentile for speed, and its pricing is moderate, in the 31st percentile. A standout trait is its reliability: a 100% success rate across all 7 benchmarks, indicating consistent, dependable operation.

On individual tasks, the model excels at Instruction Following, with perfect accuracy in one test and 80% in another, often ranking among the fastest or most accurate models. It posts perfect accuracy on Email Classification with favorable cost and speed, and strong Coding performance at 93% accuracy. Its Reasoning (70%) and General Knowledge (96.5%) scores are respectable. Ethics accuracy is 96%, but that places it at a lower percentile than its other categories.

In short, the model's strengths are instruction following, classification, and coding, backed by consistent reliability; the Ethics benchmark is its main area for relative improvement.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.40 |
| Completion | $2.00 |
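At these rates, per-request cost is simple arithmetic. The helper below uses the listed base prices ($0.40 per 1M prompt tokens, $2.00 per 1M completion tokens); actual cost varies by endpoint, as the next section shows.

```python
# Sketch: estimating request cost at the listed base rates
# ($0.40 / 1M prompt tokens, $2.00 / 1M completion tokens).
PROMPT_RATE = 0.40 / 1_000_000      # dollars per prompt token
COMPLETION_RATE = 2.00 / 1_000_000  # dollars per completion token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request at the base rates above."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# e.g. a 10K-token prompt with a 2K-token answer:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # $0.0080
```

Note that a reasoning model like R1 emits reasoning tokens that are typically billed as completion tokens, so completion counts can be much larger than the visible answer.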

Price History

Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| InferenceNet | deepseek/deepseek-r1 | 128K | $0.40 / 1M tokens | $2.00 / 1M tokens |
| DeepInfra | deepseek/deepseek-r1 | 163K | $0.70 / 1M tokens | $2.40 / 1M tokens |
| Lambda | deepseek/deepseek-r1 | 163K | $0.40 / 1M tokens | $2.00 / 1M tokens |
| Novita | deepseek/deepseek-r1 | 64K | $0.70 / 1M tokens | $2.50 / 1M tokens |
| Nebius | deepseek/deepseek-r1 | 163K | $0.80 / 1M tokens | $2.40 / 1M tokens |
| DeepInfra | deepseek/deepseek-r1 | 40K | $1.00 / 1M tokens | $3.00 / 1M tokens |
| Kluster | deepseek/deepseek-r1 | 163K | $0.40 / 1M tokens | $2.00 / 1M tokens |
| Cent-ML | deepseek/deepseek-r1 | 131K | $0.40 / 1M tokens | $2.00 / 1M tokens |
| Nebius | deepseek/deepseek-r1 | 163K | $2.00 / 1M tokens | $6.00 / 1M tokens |
| Friendli | deepseek/deepseek-r1 | 163K | $0.40 / 1M tokens | $2.00 / 1M tokens |
| Fireworks | deepseek/deepseek-r1 | 163K | $3.00 / 1M tokens | $8.00 / 1M tokens |
| Minimax | deepseek/deepseek-r1 | 64K | $0.55 / 1M tokens | $2.19 / 1M tokens |
| Azure | deepseek/deepseek-r1 | 163K | $1.49 / 1M tokens | $5.94 / 1M tokens |
| Targon | deepseek/deepseek-r1 | 163K | $0.40 / 1M tokens | $2.00 / 1M tokens |
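Since endpoints differ widely in price, it can help to compare them on a blended per-token rate. The sketch below ranks a few of the endpoints above, assuming a workload of 3 prompt tokens per completion token; the 3:1 ratio is illustrative, not from this page.

```python
# Sketch: ranking a few of the endpoints above by blended price per 1M tokens,
# assuming 3 prompt tokens per completion token (illustrative ratio only).
endpoints = [
    # (provider, context_K, input $ / 1M, output $ / 1M) -- values from the table
    ("InferenceNet", 128, 0.40, 2.00),
    ("DeepInfra",    163, 0.70, 2.40),
    ("Nebius",       163, 0.80, 2.40),
    ("Azure",        163, 1.49, 5.94),
    ("Fireworks",    163, 3.00, 8.00),
]

def blended(input_price: float, output_price: float, prompt_ratio: float = 3.0) -> float:
    """Token-weighted average price per 1M tokens for the assumed traffic mix."""
    return (prompt_ratio * input_price + output_price) / (prompt_ratio + 1)

for name, ctx, inp, out in sorted(endpoints, key=lambda e: blended(e[2], e[3])):
    print(f"{name:12s} {ctx}K context  ${blended(inp, out):.2f} blended / 1M tokens")

cheapest = min(endpoints, key=lambda e: blended(e[2], e[3]))
```

Price is only one axis: the endpoints also differ in context length (40K to 163K), which may matter more than cost for long-context workloads.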
Benchmark Results