Dolphin 2.9.2 Mixtral 8x22B 🐬

Input: Text · Output: Text · Availability: Unavailable
Author's Description

Dolphin 2.9 is designed for instruction following, conversation, and coding. This model is a finetune of [Mixtral 8x22B Instruct](/models/mistralai/mixtral-8x22b-instruct). It features a 64k context length and was fine-tuned with a 16k sequence length using ChatML templates. This model is a successor to [Dolphin Mixtral 8x7B](/models/cognitivecomputations/dolphin-mixtral-8x7b). The model is uncensored and stripped of alignment and bias; it requires an external alignment layer for ethical use. Users are cautioned to use this highly compliant model responsibly, as detailed in a blog post about uncensored models at [erichartford.com/uncensored-models](https://erichartford.com/uncensored-models). #moe #uncensored
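Because the model was fine-tuned with ChatML templates, prompts sent to raw-completion endpoints should follow the ChatML turn format. Below is a minimal sketch of assembling such a prompt in Python; the system and user messages are illustrative placeholders, not text from this page.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-formatted prompt as used by Dolphin fine-tunes.

    ChatML wraps each turn in <|im_start|>ROLE ... <|im_end|> markers and
    leaves a trailing assistant header for the model to complete.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

# Example usage (placeholder messages):
prompt = build_chatml_prompt(
    system="You are Dolphin, a helpful coding assistant.",
    user="Write a Python function that reverses a string.",
)
print(prompt)
```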

Key Specifications
| Spec | Value |
|---|---|
| Cost | $$$$ |
| Context | 16K |
| Parameters | 22B |
| Released | Jun 07, 2024 |
Supported Parameters

This model supports the following parameters:

Logit Bias, Stop, Seed, Min P, Top P, Max Tokens, Frequency Penalty, Temperature, Presence Penalty
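As a sketch of how these parameters map onto a typical OpenAI-style chat-completions request, the snippet below posts a JSON payload to a placeholder URL. The base URL, API key handling, and exact parameter spellings (notably `min_p`) are assumptions that vary by provider; they are not details taken from this page.

```python
import os
import requests

# Placeholder endpoint and key: substitute your provider's OpenAI-compatible
# chat-completions URL and credentials. Parameter names follow common
# OpenAI-style conventions; `min_p` in particular is provider-specific.
BASE_URL = "https://example-provider.invalid/v1/chat/completions"
API_KEY = os.environ.get("PROVIDER_API_KEY", "")

payload = {
    "model": "cognitivecomputations/dolphin-mixtral-8x22b",
    "messages": [
        {"role": "user", "content": "Summarize what a mixture-of-experts model is."}
    ],
    # Sampling / decoding parameters listed as supported above:
    "temperature": 0.7,
    "top_p": 0.9,
    "min_p": 0.05,
    "max_tokens": 512,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "seed": 42,
    "stop": ["<|im_end|>"],
    "logit_bias": {},
}

response = requests.post(
    BASE_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```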
Performance Summary

Dolphin 2.9.2 Mixtral 8x22B, a fine-tuned successor to Dolphin Mixtral 8x7B, shows moderate speed (30th percentile across benchmarks) and moderate pricing (37th percentile). Its standout trait is reliability: a 100% success rate across all evaluated benchmarks, indicating consistent, stable operation without technical failures.

Benchmark accuracy is more varied. The model scored 99.0% on Ethics (Baseline), placing it in the 57th percentile and suggesting a strong grasp of ethical principles, and 95.0% on Email Classification (Baseline) (33rd percentile), indicating reasonable proficiency at categorization. General Knowledge (Baseline) came in at 71.0% (21st percentile) and Reasoning (Baseline) at 40.0% (27th percentile), marking these as relative weaknesses. The model is uncensored and designed for instruction following, conversation, and coding; it requires an external alignment layer for ethical use.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.90 |
| Completion | $0.90 |
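At a flat $0.90 per 1M tokens for both prompt and completion, per-request cost is a simple linear function of token counts. The sketch below just applies those rates; the token counts in the usage example are illustrative, not data from this page.

```python
PROMPT_RATE_PER_M = 0.90      # USD per 1M prompt tokens (from the table above)
COMPLETION_RATE_PER_M = 0.90  # USD per 1M completion tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed per-token rates."""
    return (
        prompt_tokens / 1_000_000 * PROMPT_RATE_PER_M
        + completion_tokens / 1_000_000 * COMPLETION_RATE_PER_M
    )

# Example: a 2,000-token prompt with a 500-token completion (illustrative numbers)
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.002250
```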


Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Novita | cognitivecomputations/dolphin-mixtral-8x22b | 16K | $0.90 / 1M tokens | $0.90 / 1M tokens |
Benchmark Results
| Benchmark | Accuracy |
|---|---|
| Ethics (Baseline) | 99.0% |
| Email Classification (Baseline) | 95.0% |
| General Knowledge (Baseline) | 71.0% |
| Reasoning (Baseline) | 40.0% |