Author's Description
Nous Hermes 2 Mixtral 8x7B DPO is the flagship Nous Research model trained over the [Mixtral 8x7B MoE LLM](/models/mistralai/mixtral-8x7b). The model was trained on over 1,000,000 entries of primarily [GPT-4](/models/openai/gpt-4) generated data, as well as other high-quality data from open datasets across the AI landscape, achieving state-of-the-art performance on a variety of tasks. #moe
Key Specifications
Supported Parameters
This model supports the following parameters:
Performance Summary
Nous Hermes 2 Mixtral 8x7B DPO demonstrates competitive performance in terms of both speed and cost, ranking in the 43rd and 45th percentile respectively across five benchmarks. This indicates a balanced profile for operational efficiency.

In terms of accuracy, the model exhibits varied performance across categories. It achieves its highest accuracy in General Knowledge at 86.0%, placing it in the 26th percentile and suggesting a solid foundation in diverse factual domains. Its performance in other areas is notably lower: in Ethics and Email Classification it scores 53.0% (15th percentile) and 89.0% (13th percentile) respectively, indicating room for improvement in nuanced ethical reasoning and precise categorization. A significant weakness is Coding, where it scores only 7.0% accuracy (13th percentile). Instruction Following also presents a challenge, with an accuracy of 28.0% (29th percentile).

Overall, Nous Hermes 2 Mixtral 8x7B DPO is a cost-effective and reasonably fast model with a strong grasp of general knowledge. Its primary areas for development lie in specialized tasks such as coding, ethical reasoning, and complex instruction following.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.60 |
| Completion | $0.60 |
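With identical prompt and completion rates of $0.60 per million tokens, the cost of a request is a simple linear function of token counts. A minimal sketch (the token counts in the example are illustrative, not from the source):

```python
# Estimate request cost at the listed rates:
# $0.60 per 1M tokens for both prompt and completion.
PROMPT_PRICE_PER_M = 0.60
COMPLETION_PRICE_PER_M = 0.60

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    return (prompt_tokens * PROMPT_PRICE_PER_M
            + completion_tokens * COMPLETION_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 500-token completion.
cost = estimate_cost(2_000, 500)
print(f"${cost:.6f}")  # 2,500 tokens total at $0.60/1M -> $0.001500
```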
Price History
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Together | nousresearch/nous-hermes-2-mixtral-8x7b-dpo | 32K | $0.60 / 1M tokens | $0.60 / 1M tokens |
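Together serves models through an OpenAI-compatible chat completions API, so a request for this endpoint can be assembled as below. This is a sketch under assumptions: the endpoint URL and payload shape follow Together's OpenAI-compatible convention, the model slug is taken from the table above, and the API key is a placeholder you would supply yourself.

```python
# Sketch of a chat-completions request to the Together endpoint.
# Assumptions: URL and payload shape follow Together's
# OpenAI-compatible API; the API key is a placeholder.

MODEL_SLUG = "nousresearch/nous-hermes-2-mixtral-8x7b-dpo"

def build_request(prompt: str, api_key: str, max_tokens: int = 256):
    """Assemble (url, headers, payload) for a chat-completions call."""
    url = "https://api.together.xyz/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "model": MODEL_SLUG,
        "messages": [{"role": "user", "content": prompt}],
        # Keep prompt + output within the endpoint's 32K context window.
        "max_tokens": max_tokens,
    }
    return url, headers, payload

# The actual call would then be made with an HTTP client, e.g.:
#   import requests
#   url, headers, payload = build_request("Hello", api_key="YOUR_KEY")
#   resp = requests.post(url, headers=headers, json=payload)
```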
Benchmark Results
| Benchmark | Category | Reasoning | Strategy | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|---|
Other Models by nousresearch
| Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| Nous: Hermes 4 70B | Aug 26, 2025 | 70B | 131K | Text input, text output | ★★★★ | ★★★ | $$ |
| Nous: Hermes 4 405B | Aug 26, 2025 | 405B | 131K | Text input, text output | ★★★ | ★★★★ | $$$$ |
| Nous: DeepHermes 3 Mistral 24B Preview | May 09, 2025 | 24B | 32K | Text input, text output | ★★★★ | ★★★ | $$ |
| Nous: DeepHermes 3 Llama 3 8B Preview | Feb 27, 2025 | 8B | 131K | Text input, text output | — | — | $ |
| Nous: Hermes 3 70B Instruct | Aug 17, 2024 | 70B | 12K | Text input, text output | ★★★★ | ★★★ | $$ |
| Nous: Hermes 3 405B Instruct | Aug 15, 2024 | 405B | 131K | Text input, text output | ★★★ | ★★★★ | $$$$ |
| NousResearch: Hermes 2 Pro - Llama-3 8B | May 26, 2024 | 8B | 8K | Text input, text output | ★★★★★ | ★★ | $ |