OpenAI: o4 Mini High

Inputs: image, file, text. Output: text.
Author's Description

OpenAI o4-mini-high is the same model as [o4-mini](/openai/o4-mini) with reasoning_effort set to high. OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and demonstrates competitive reasoning and coding performance across benchmarks like AIME (99.5% with Python) and SWE-bench, outperforming its predecessor o3-mini and even approaching o3 in some domains. Despite its smaller size, o4-mini exhibits high accuracy in STEM tasks, visual problem solving (e.g., MathVista, MMMU), and code editing. It is especially well-suited for high-throughput scenarios where latency or cost is critical. Thanks to its efficient architecture and refined reinforcement learning training, o4-mini can chain tools, generate structured outputs, and solve multi-step tasks with minimal delay—often in under a minute.
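Since o4-mini-high is simply o4-mini invoked with high reasoning effort, a request can target it directly by slug. The sketch below builds such a request body with the standard library only; the endpoint shape follows the common OpenAI-compatible chat completions format, and the `reasoning`/`effort` field mirrors the OpenRouter-style parameter — both are assumptions, not taken from this page.

```python
import json

def build_request(prompt: str) -> str:
    """Build a hypothetical chat completions request body for this model.

    The "openai/o4-mini-high" slug is the one used on this page; the
    "reasoning": {"effort": "high"} shape is an assumed OpenRouter-style
    equivalent of calling o4-mini with reasoning_effort="high".
    """
    body = {
        "model": "openai/o4-mini-high",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"effort": "high"},  # already implied by the -high slug
        "max_tokens": 1024,
    }
    return json.dumps(body)

payload = build_request("Prove that sqrt(2) is irrational.")
```

Sending the payload to a real endpoint would additionally require an API key header; the body above is only the model-selection part.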

Key Specifications

Cost: $$$$$ (premium tier)
Context: 200K tokens
Released: Apr 16, 2025
Supported Parameters

This model supports the following parameters:

Reasoning, Structured Outputs, Response Format, Seed, Max Tokens, Tool Choice, Tools, Include Reasoning
Features

This model supports the following features:

Response Format, Tools, Reasoning, Structured Outputs
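The Structured Outputs feature constrains the model's reply to a JSON schema. The fragment below sketches what such a request field might look like, assuming the OpenAI-style `response_format` with `json_schema`; the schema itself (an email classifier, echoing the Email Classification benchmark mentioned below) is purely illustrative.

```python
# Illustrative Structured Outputs fragment for a request body, assuming
# the OpenAI-style "response_format" parameter. The schema fields
# (label, confidence) are hypothetical examples, not from this page.
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "email_classification",
        "strict": True,  # reject outputs that do not match the schema
        "schema": {
            "type": "object",
            "properties": {
                "label": {"type": "string", "enum": ["spam", "not_spam"]},
                "confidence": {"type": "number"},
            },
            "required": ["label", "confidence"],
            "additionalProperties": False,
        },
    },
}
```

With `strict` enabled, the model's completion is guaranteed to parse against this schema, which pairs naturally with the Tools and Tool Choice parameters for agentic pipelines.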
Performance Summary

OpenAI's o4 Mini High, released on April 16, 2025, is a compact reasoning model designed for efficiency and strong multimodal capability, with reasoning effort set to high. It shows moderate speed (31st percentile) and sits at a premium price point (14th percentile). A standout feature is its exceptional reliability: a 100% success rate across all benchmarks. The model achieves perfect accuracy in General Knowledge, making it the most accurate model at its price point and among models of similar speed. It also excels in Reasoning (98% accuracy, 92nd percentile) and Coding (93% accuracy, 84th percentile), outperforming many peers, and its Email Classification accuracy is strong at 99% (76th percentile). Its Ethics performance is a respectable 98% (38th percentile), and its Instruction Following accuracy of 71% still places it in the 82nd percentile. A notable weakness is the Hallucinations benchmark, where it scored only 62% (12th percentile), indicating a tendency not to acknowledge uncertainty appropriately. Overall, o4 Mini High is a highly reliable model with strong reasoning and coding capabilities, well suited to high-throughput scenarios where accuracy on complex tasks is critical, despite its premium cost and moderate speed.

Model Pricing

Current Pricing

Feature | Price (per 1M tokens)
Prompt | $1.10
Completion | $4.40
Input Cache Read | $0.275
Web Search | $10000
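The per-token prices above make per-request cost easy to estimate. The helper below applies them, assuming (as is common for cached-input pricing) that cached prompt tokens are billed at the cache-read rate instead of the full prompt rate; that billing rule is an assumption, not stated on this page.

```python
# Per-1M-token prices from the table above (USD).
PROMPT_PER_M = 1.10
COMPLETION_PER_M = 4.40
CACHE_READ_PER_M = 0.275

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  cached_tokens: int = 0) -> float:
    """Estimate USD cost of one request.

    Assumes cached prompt tokens replace full-price prompt tokens,
    i.e. they are billed at the Input Cache Read rate.
    """
    billed_prompt = prompt_tokens - cached_tokens
    return (billed_prompt * PROMPT_PER_M
            + cached_tokens * CACHE_READ_PER_M
            + completion_tokens * COMPLETION_PER_M) / 1_000_000

# Example: 10K prompt tokens (2K of them cached) and 5K completion tokens.
cost = estimate_cost(10_000, 5_000, cached_tokens=2_000)
# 8_000 * 1.10 + 2_000 * 0.275 + 5_000 * 4.40 = 31_350 micro-dollars
```

At these rates a long reasoning trace dominates the bill: completion tokens cost four times as much as prompt tokens, and high reasoning effort tends to produce more of them.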

Available Endpoints

Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output)
OpenAI | openai/o4-mini-high-2025-04-16 | 200K | $1.10 / 1M tokens | $4.40 / 1M tokens
Benchmark Results

(Per-benchmark rows for category, reasoning strategy, executions, accuracy, cost, and duration did not survive extraction; headline figures appear in the Performance Summary above.)