Magnum v4 72B

Modality: text input → text output
Author's Description

This is a series of models designed to replicate the prose quality of the Claude 3 models, specifically [Claude 3.5 Sonnet](https://openrouter.ai/anthropic/claude-3.5-sonnet) and [Claude 3 Opus](https://openrouter.ai/anthropic/claude-3-opus). The model is fine-tuned on top of [Qwen2.5 72B](https://openrouter.ai/qwen/qwen-2.5-72b-instruct).

Key Specifications
Cost
$$$$
Context
16K
Parameters
72B
Released
Oct 21, 2024
Supported Parameters

This model supports the following parameters:

Seed, Frequency Penalty, Structured Outputs, Top P, Min P, Response Format, Temperature, Stop, Presence Penalty, Max Tokens, Logit Bias
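As a sketch of how these parameters combine, the request body below uses several of them at once. The endpoint shape and field names assume OpenRouter's OpenAI-compatible chat completions API; the message content and parameter values are illustrative, not recommendations.

```python
import json

# Sketch of a chat completion request body using the supported sampling
# parameters. Field names follow the OpenAI-compatible convention that
# OpenRouter exposes; values are illustrative only.
payload = {
    "model": "anthracite-org/magnum-v4-72b",
    "messages": [
        {"role": "user", "content": "Write a short scene set in a rainy city."}
    ],
    "temperature": 0.9,        # sampling temperature
    "top_p": 0.95,             # nucleus sampling cutoff
    "min_p": 0.05,             # minimum-probability cutoff
    "frequency_penalty": 0.1,  # discourage verbatim repetition
    "presence_penalty": 0.1,   # encourage new topics
    "max_tokens": 512,         # cap on completion length
    "seed": 42,                # best-effort reproducibility
    "stop": ["\n\n###"],       # stop sequence
}

print(json.dumps(payload, indent=2))
```

This dictionary would be sent as the JSON body of a POST to the provider's chat completions endpoint, with an API key in the `Authorization` header.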
Features

This model supports the following features:

Structured Outputs, Response Format
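Structured outputs are requested through the `response_format` field. The example below follows the OpenAI-style `json_schema` convention that OpenRouter exposes; the schema itself (a simple book record) is hypothetical and only illustrates the shape of the payload.

```python
import json

# Hypothetical response_format value for structured outputs. The
# "json_schema" wrapper follows the OpenAI-compatible convention;
# the book-record schema is made up for illustration.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "book_record",
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "author": {"type": "string"},
                "year": {"type": "integer"},
            },
            "required": ["title", "author", "year"],
            "additionalProperties": False,
        },
    },
}

print(json.dumps(response_format, indent=2))
```

With `strict` set, a conforming provider constrains the completion to valid JSON matching the schema, rather than merely encouraging JSON output.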
Performance Summary

Magnum v4 72B, developed by anthracite-org and fine-tuned on Qwen2.5 72B, aims to replicate the prose quality of Claude 3 models. It demonstrates moderate speed performance, ranking in the 36th percentile, and offers moderate pricing, placing it in the 27th percentile across benchmarks. A standout feature is its exceptional reliability, achieving a 100% success rate with no technical failures.

The model excels in specific areas, achieving perfect 100.0% accuracy in both the Hallucinations (Baseline) and Ethics (Baseline) benchmarks. For hallucinations, it is the most accurate model at its price point and speed; similarly, in ethics, it is the most accurate among models with comparable pricing and speed. It also shows strong performance in General Knowledge (98.5% accuracy) and Email Classification (97.0% accuracy). Its primary weakness appears in Reasoning (Baseline), where it achieved 68.0% accuracy, indicating room for improvement in complex problem-solving.

Overall, Magnum v4 72B presents a highly reliable and accurate option, particularly for tasks requiring ethical adherence and factual integrity, while maintaining competitive pricing.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
| --- | --- |
| Prompt | $3 |
| Completion | $5 |
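At these rates, the cost of a single request is simple arithmetic over the two token counts. The helper below is a minimal sketch; the example token counts are illustrative.

```python
# Rough per-request cost at the listed rates: $3 per 1M prompt tokens
# and $5 per 1M completion tokens.
PROMPT_RATE = 3.0 / 1_000_000      # USD per prompt token
COMPLETION_RATE = 5.0 / 1_000_000  # USD per completion token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of one request at the published rates."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# Example: a 12K-token prompt (near the 16K context limit) with a 2K reply.
print(f"${estimate_cost(12_000, 2_000):.4f}")  # → $0.0460
```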

Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
| --- | --- | --- | --- | --- |
| Mancer 2 | anthracite-org/magnum-v4-72b | 16K | $3 / 1M tokens | $5 / 1M tokens |
| Featherless | anthracite-org/magnum-v4-72b | 16K | $3 / 1M tokens | $5 / 1M tokens |
Benchmark Results