Anthropic: Claude Opus 4.5

Image input · Text input · File input · Text output
Author's Description

Claude Opus 4.5 is Anthropic’s frontier reasoning model, optimized for complex software engineering, agentic workflows, and long-horizon computer use. It offers strong multimodal capabilities, competitive performance on real-world coding and reasoning benchmarks, and improved robustness to prompt injection. The model is designed to operate efficiently across varied effort levels, letting developers trade off speed, depth, and token usage to match task requirements. It introduces a new parameter for controlling token efficiency, exposed through OpenRouter's verbosity parameter (low, medium, or high). Opus 4.5 supports advanced tool use, extended context management, and coordinated multi-agent setups, making it well suited to autonomous research, debugging, multi-step planning, and spreadsheet/browser manipulation. Compared with prior Opus generations, it delivers substantial gains in structured reasoning, execution reliability, and alignment, while reducing token overhead and improving performance on long-running tasks.
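
The verbosity control described above is set per request. Below is a minimal sketch of building such a request for OpenRouter's standard chat-completions endpoint; the model slug is taken from the Available Endpoints list on this page, `YOUR_KEY` is a placeholder, and the exact behavior of the `verbosity` field should be confirmed against OpenRouter's own documentation:

```python
import json
import urllib.request

# Sketch of an OpenRouter chat-completions request; the "verbosity" field
# ("low" | "medium" | "high") is assumed to map to the token-efficiency
# control described above.
def build_request(prompt: str, verbosity: str = "medium") -> dict:
    assert verbosity in ("low", "medium", "high")
    return {
        "model": "anthropic/claude-4.5-opus-20251124",
        "messages": [{"role": "user", "content": prompt}],
        "verbosity": verbosity,
    }

payload = build_request("Summarize this diff.", verbosity="low")
req = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": "Bearer YOUR_KEY",  # placeholder API key
        "Content-Type": "application/json",
    },
)
# urllib.request.urlopen(req)  # uncomment once a real key is set
```

Lower verbosity trades response depth for fewer output tokens, which matters at this model's completion pricing (see Model Pricing below).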

Key Specifications
Cost: $$$$$
Context: 200K
Released: Nov 24, 2025
Supported Parameters

This model supports the following parameters:

Stop · Max Tokens · Reasoning · Tool Choice · Temperature · Include Reasoning · Tools
Features

This model supports the following features:

Tools · Reasoning
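
Since the model supports both tools and reasoning, one request body can exercise the full supported-parameter set listed above. The sketch below uses the OpenAI-style function-calling schema that OpenRouter accepts; the tool name `run_tests` and its schema are illustrative, not part of any real API:

```python
# Illustrative request body combining the Tools and Reasoning features
# with the supported parameters listed above (tool_choice, max_tokens,
# temperature). "run_tests" is a made-up tool for this sketch.
request_body = {
    "model": "anthropic/claude-4.5-opus-20251124",
    "messages": [{"role": "user", "content": "Fix the failing test suite."}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "run_tests",
                "description": "Run the project's test suite and return results.",
                "parameters": {
                    "type": "object",
                    "properties": {"path": {"type": "string"}},
                    "required": ["path"],
                },
            },
        }
    ],
    "tool_choice": "auto",            # let the model decide when to call the tool
    "reasoning": {"effort": "high"},  # request extended reasoning (assumed shape)
    "max_tokens": 2048,
    "temperature": 0.2,
}
```
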
Performance Summary

Claude Opus 4.5 demonstrates competitive response times, performing among the faster models with a 45th percentile speed ranking. However, it is positioned at premium pricing levels, ranking in the 7th percentile for cost-effectiveness. A standout feature is its exceptional reliability: a perfect 100% success rate across all benchmarks, indicating robust technical stability.

The model excels in several critical areas. It achieves perfect accuracy in General Knowledge and Ethics, often being the most accurate model at its price point and speed. Its performance in Coding (95% accuracy), Reasoning (98%), and Mathematics (96%) is also highly impressive, placing it in the top echelon for these complex tasks. Instruction Following is strong at 77% accuracy, ranking in the 88th percentile.

Its Hallucinations score of 98% accuracy is good but not perfect, leaving a minor area for improvement, and Email Classification is solid at 98% accuracy, though mid-pack for that specific task. Overall, Opus 4.5 is a powerful, reliable model particularly suited to demanding analytical and generative tasks.

Model Pricing

Current Pricing

Feature            Price (per 1M tokens)
Prompt             $5
Completion         $25
Input Cache Read   $0.50
Input Cache Write  $6.25
Web Search         $10000
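
The per-token prices above reduce to a simple formula: tokens × (price per 1M) ÷ 1,000,000, summed per category. A worked sketch using the table's figures (the helper name is illustrative):

```python
# Cost in USD for one request, using the per-1M-token prices listed above.
PRICES = {
    "prompt": 5.00,       # $/1M input tokens
    "completion": 25.00,  # $/1M output tokens
    "cache_read": 0.50,   # $/1M cached input tokens read
    "cache_write": 6.25,  # $/1M input tokens written to cache
}

def request_cost(tokens: dict) -> float:
    """Sum each token count times its per-million-token price."""
    return sum(PRICES[k] * tokens.get(k, 0) / 1_000_000 for k in PRICES)

# Example: 100K prompt tokens, 5K completion tokens, no caching.
cost = request_cost({"prompt": 100_000, "completion": 5_000})
# 100_000 * 5 / 1e6 = $0.50, plus 5_000 * 25 / 1e6 = $0.125 -> $0.625
```

Note how completion tokens dominate at a 5× output premium, which is why the verbosity control described earlier directly affects spend.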

Price History

Available Endpoints
Provider        Endpoint Name                        Context Length  Pricing (Input)  Pricing (Output)
Google          anthropic/claude-4.5-opus-20251124   200K            $5 / 1M tokens   $25 / 1M tokens
Anthropic       anthropic/claude-4.5-opus-20251124   200K            $5 / 1M tokens   $25 / 1M tokens
Amazon Bedrock  anthropic/claude-4.5-opus-20251124   200K            $5 / 1M tokens   $25 / 1M tokens
Benchmark Results