DeepSeek: R1 Distill Qwen 14B

Text input · Text output
Author's Description

DeepSeek R1 Distill Qwen 14B is a distilled large language model based on [Qwen 2.5 14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B), fine-tuned on outputs from [DeepSeek R1](/deepseek/deepseek-r1). It outperforms OpenAI's o1-mini across various benchmarks, achieving new state-of-the-art results for dense models. Benchmark results include:

- AIME 2024 pass@1: 69.7
- MATH-500 pass@1: 93.9
- CodeForces Rating: 1481

Fine-tuning on DeepSeek R1's outputs gives the model performance comparable to much larger frontier models.
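The pass@1 figures above are the expected fraction of problems solved on the first sampled attempt; with several samples per problem it is usually estimated as the mean per-problem success rate. A minimal sketch of that computation (the sample outcomes here are hypothetical, not from these benchmarks):

```python
# Estimate pass@1 from repeated samples per problem:
# pass@1 = mean over problems of (correct samples / total samples).
def pass_at_1(results):
    """results: list of per-problem lists of booleans, one per sampled attempt."""
    per_problem = [sum(r) / len(r) for r in results]
    return sum(per_problem) / len(per_problem)

# Hypothetical outcomes for three problems, four samples each.
outcomes = [
    [True, True, True, False],    # 3/4 correct
    [True, False, False, False],  # 1/4 correct
    [True, True, True, True],     # 4/4 correct
]
print(pass_at_1(outcomes))
```

With a single sample per problem this reduces to the plain fraction of problems answered correctly.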

Key Specifications
Cost
$$$
Context
32K
Parameters
14B
Released
Jan 29, 2025
Supported Parameters

This model supports the following parameters:

- Include Reasoning
- Stop
- Max Tokens
- Top P
- Frequency Penalty
- Reasoning
- Seed
- Temperature
- Presence Penalty
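These parameters map onto the usual OpenAI-style chat-completion request body. A sketch of a request payload using them (the `include_reasoning` field name and the endpoint conventions are assumptions based on common OpenAI-compatible APIs, not confirmed by this page):

```python
import json

# Hypothetical chat-completion payload exercising the supported parameters.
payload = {
    "model": "deepseek/deepseek-r1-distill-qwen-14b",
    "messages": [
        {"role": "user", "content": "Prove that the square root of 2 is irrational."}
    ],
    "max_tokens": 1024,         # Max Tokens
    "temperature": 0.6,         # Temperature
    "top_p": 0.95,              # Top P
    "frequency_penalty": 0.0,   # Frequency Penalty
    "presence_penalty": 0.0,    # Presence Penalty
    "stop": ["</answer>"],      # Stop (hypothetical stop sequence)
    "seed": 42,                 # Seed
    "include_reasoning": True,  # Include Reasoning (assumed field name)
}
print(json.dumps(payload, indent=2))
```

Reasoning models like this one often emit long chains of thought, so `max_tokens` generally needs to be set higher than for a non-reasoning model of the same size.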
Features

This model supports the following features:

Reasoning
Performance Summary

DeepSeek R1 Distill Qwen 14B, a distilled model leveraging DeepSeek R1 outputs, demonstrates strong performance across several key metrics. It consistently ranks among the fastest models across nine speed benchmarks. Its pricing is competitive, placing it in the 58th percentile across eight benchmarks, and it exhibits high reliability with a 91% success rate, indicating consistent operational stability.

On specific benchmarks, the model shows notable strength in Coding, achieving 93.0% accuracy (88th percentile), and solid capabilities in Mathematics (78.0% accuracy, 41st percentile) and Reasoning (66.0% accuracy, 53rd percentile). Its AIME 2024 pass@1 of 69.7 and MATH-500 pass@1 of 93.9 further underscore its mathematical and problem-solving strengths.

However, the model falls into lower percentiles on Hallucinations (78.0% accuracy, 22nd percentile), General Knowledge (77.5% accuracy, 23rd percentile), and Ethics (87.5% accuracy, 22nd percentile). Instruction Following presents a mixed picture, with one test showing 44.0% accuracy (41st percentile) and another 0.0% accuracy, suggesting inconsistency on complex instruction sets.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.15 |
| Completion | $0.15 |
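Since prompt and completion tokens are billed at the same flat rate, request cost is a simple linear function of total token count. A minimal sketch using the rates from the table above (the example token counts are illustrative):

```python
# Cost of one request at $0.15 per 1M tokens for both prompt and completion.
PROMPT_RATE = 0.15 / 1_000_000      # USD per prompt token
COMPLETION_RATE = 0.15 / 1_000_000  # USD per completion token

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# e.g. a 2,000-token prompt with a 10,000-token reasoning-heavy completion:
print(f"${request_cost(2_000, 10_000):.6f}")
```

Because reasoning tokens count toward completion tokens, long chains of thought dominate the bill even at this low per-token rate.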

Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Novita | deepseek/deepseek-r1-distill-qwen-14b | 32K | $0.15 / 1M tokens | $0.15 / 1M tokens |
| GMICloud | deepseek/deepseek-r1-distill-qwen-14b | 131K | $0.15 / 1M tokens | $0.15 / 1M tokens |
| Together | deepseek/deepseek-r1-distill-qwen-14b | 131K | $1.6 / 1M tokens | $1.6 / 1M tokens |