Dolphin3.0 R1 Mistral 24B

Text input · Text output · Free option available
Author's Description

Dolphin 3.0 R1 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model for coding, math, agentic, function-calling, and general use cases. The R1 version has been trained for 3 epochs to reason using 800k reasoning traces from the Dolphin-R1 dataset. Dolphin aims to be a general-purpose reasoning instruct model, similar to the models behind ChatGPT, Claude, and Gemini. It is part of the [Dolphin 3.0 Collection](https://huggingface.co/collections/cognitivecomputations/dolphin-30-677ab47f73d7ff66743979a3), curated and trained by [Eric Hartford](https://huggingface.co/ehartford), [Ben Gitter](https://huggingface.co/bigstorm), [BlouseJury](https://huggingface.co/BlouseJury), and [Cognitive Computations](https://huggingface.co/cognitivecomputations).

Key Specifications
Cost: $$
Context: 32K
Parameters: 24B
Released: Feb 13, 2025
Supported Parameters

This model supports the following parameters:

Stop, Presence Penalty, Logit Bias, Temperature, Seed, Frequency Penalty, Max Tokens, Include Reasoning, Top P, Min P, Reasoning, Logprobs, Top Logprobs
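As a rough illustration, the sketch below shows how these parameters might be passed in a request to an OpenAI-compatible chat completions endpoint. The base URL, environment variables, and snake_case parameter spellings are assumptions made for the example, not details taken from this page.

```python
import os

import requests

# Hypothetical OpenAI-compatible endpoint; replace API_BASE/API_KEY with your
# provider's actual values. The response shape is assumed to follow the usual
# chat-completions format.
API_BASE = os.environ.get("API_BASE", "https://example-provider.invalid/v1")
API_KEY = os.environ["API_KEY"]

payload = {
    "model": "cognitivecomputations/dolphin3.0-r1-mistral-24b",
    "messages": [
        {"role": "user", "content": "Summarize Dolphin 3.0 R1 in one sentence."}
    ],
    # Decoding parameters from the supported-parameters list, written in the
    # snake_case form most OpenAI-compatible APIs expect:
    "temperature": 0.7,
    "top_p": 0.9,
    "min_p": 0.05,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "max_tokens": 512,
    "seed": 42,
    "stop": ["###"],
    "logit_bias": {},           # map of token id -> bias
    "logprobs": True,
    "top_logprobs": 5,
    "include_reasoning": True,  # ask the provider to return the reasoning trace
}

resp = requests.post(
    f"{API_BASE}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```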
Features

This model supports the following features:

Reasoning
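R1-style reasoning models generally expose their chain of thought either as a separate reasoning field in the response (e.g. when Include Reasoning is enabled) or as a `<think>...</think>` block inside the message text. The exact format returned by a given endpoint is not documented on this page, so the helper below is only a sketch that assumes the `<think>`-tag convention.

```python
import re


def split_reasoning(text: str) -> tuple[str, str]:
    """Split a <think>...</think> reasoning block from the final answer.

    Assumption: the model wraps its reasoning in <think> tags, as many
    R1-style models do. If your provider already returns a separate
    `reasoning` field, prefer that over parsing the text.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = (text[: match.start()] + text[match.end() :]).strip()
    return reasoning, answer


reasoning, answer = split_reasoning(
    "<think>2 + 2 is basic arithmetic; the sum is 4.</think>The answer is 4."
)
print("Reasoning:", reasoning)
print("Answer:", answer)
```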
Performance Summary

Dolphin 3.0 R1 Mistral 24B, a general-purpose instruct-tuned model, demonstrates exceptional speed, consistently ranking among the fastest models available. Its pricing is also highly competitive, placing it in the 92nd percentile for cost-efficiency across benchmarks.

While it excels in speed and affordability, its accuracy across benchmarks leaves significant room for improvement. Its scores in Instruction Following (27.2%), Coding (18.2%), and Reasoning (26.0%) are notably low, placing it in the lower percentiles for these critical categories. Even in Email Classification (89.0%) and General Knowledge (76.0%), where accuracy is higher, its percentile rankings (15th and 23rd, respectively) suggest it lags behind many peers. The model's primary strength is rapid, cost-effective processing, which makes it attractive where speed and budget are paramount; its low accuracy on core reasoning and instruction-following tasks, however, limits its utility for scenarios demanding high precision and complex problem-solving.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
| --- | --- |
| Prompt | $0.01 |
| Completion | $0.0341 |
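At these rates a typical request costs a small fraction of a cent. A minimal sketch of the arithmetic, using illustrative token counts:

```python
# Listed rates: $0.01 per 1M prompt tokens, $0.0341 per 1M completion tokens.
PROMPT_PRICE_PER_M = 0.01
COMPLETION_PRICE_PER_M = 0.0341


def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost in USD of a single request at the listed rates."""
    return (
        prompt_tokens / 1_000_000 * PROMPT_PRICE_PER_M
        + completion_tokens / 1_000_000 * COMPLETION_PRICE_PER_M
    )


# Example: a 2,000-token prompt with an 800-token completion
print(f"${request_cost(2_000, 800):.8f}")  # $0.00004728
```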

Price History

Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
| --- | --- | --- | --- | --- |
| Chutes | cognitivecomputations/dolphin3.0-r1-mistral-24b | 32K | $0.01 / 1M tokens | $0.0341 / 1M tokens |
Benchmark Results
| Benchmark | Category | Reasoning | Free | Executions | Accuracy | Cost | Duration |
| --- | --- | --- | --- | --- | --- | --- | --- |