OpenAI: o4 Mini High

Modalities: image input · file input · text input · text output
Author's Description

OpenAI o4-mini-high is the same model as [o4-mini](/openai/o4-mini) with reasoning_effort set to high. OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and demonstrates competitive reasoning and coding performance across benchmarks like AIME (99.5% when allowed to use Python) and SWE-bench, outperforming its predecessor o3-mini and even approaching o3 in some domains. Despite its smaller size, o4-mini exhibits high accuracy in STEM tasks, visual problem solving (e.g., MathVista, MMMU), and code editing. It is especially well-suited for high-throughput scenarios where latency or cost is critical. Thanks to its efficient architecture and refined reinforcement learning training, o4-mini can chain tools, generate structured outputs, and solve multi-step tasks with minimal delay, often in under a minute.

Key Specifications

Cost: $$$$$ (premium tier)
Context: 200K tokens
Released: Apr 16, 2025
Supported Parameters

This model supports the following parameters:

Tool Choice, Max Tokens, Structured Outputs, Tools, Seed, Response Format
Features

This model supports the following features:

Structured Outputs, Response Format, Tools
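The supported parameters above can be sketched as a request body. This is a minimal illustration, assuming an OpenAI-compatible Chat Completions schema; the `lookup_city` tool and the exact field names are assumptions, so check your provider's API reference before sending.

```python
import json

# Request body exercising the listed parameters: Max Tokens, Seed,
# Response Format / Structured Outputs, Tools, and Tool Choice.
payload = {
    "model": "openai/o4-mini-high",
    "messages": [
        {"role": "user", "content": "Extract the city from: 'Ship to Berlin by Friday.'"}
    ],
    "max_tokens": 256,            # Max Tokens
    "seed": 42,                   # Seed (best-effort determinism)
    "response_format": {          # Response Format with a structured-output schema
        "type": "json_schema",
        "json_schema": {
            "name": "shipment",
            "schema": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    },
    "tools": [                    # Tools: one hypothetical function for illustration
        {
            "type": "function",
            "function": {
                "name": "lookup_city",
                "parameters": {
                    "type": "object",
                    "properties": {"name": {"type": "string"}},
                },
            },
        }
    ],
    "tool_choice": "auto",        # Tool Choice: let the model decide
}

body = json.dumps(payload)
```

The serialized `body` would be POSTed to the provider's chat-completions endpoint with your API key in the `Authorization` header.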
Performance Summary

OpenAI o4-mini-high demonstrates moderate speed, ranking in the 32nd percentile across benchmarks; it is not among the fastest models. Its pricing sits at premium levels, in the 16th percentile. However, it exhibits exceptional reliability, at the 100th percentile: it consistently returns usable responses with minimal technical failures.

Across benchmarks, o4-mini-high shows strong capabilities: 93.0% accuracy in Coding, 71.0% in Instruction Following, and 99.0% in Email Classification. Notably, it achieved perfect 100.0% accuracy in both Reasoning and General Knowledge, standing out as the most accurate model at its price point and among models of similar speed in those categories. Its Ethics accuracy of 98.0% corresponds to only the 41st percentile, suggesting other models perform similarly or better in that area.

In short, its key strengths are high accuracy on complex reasoning, general knowledge, and classification tasks, coupled with robust reliability; its main weakness is speed.

Model Pricing

Current Pricing

Feature           | Price (per 1M tokens)
Prompt            | $1.10
Completion        | $4.40
Input Cache Read  | $0.275
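The rates above make per-request costs easy to estimate. A back-of-the-envelope sketch, assuming the listed prices are USD and that cached prompt tokens are billed at the cache-read rate in place of the full prompt rate:

```python
# Per-1M-token rates from the pricing table (assumed USD).
PROMPT_RATE = 1.10        # $ per 1M prompt tokens
COMPLETION_RATE = 4.40    # $ per 1M completion tokens
CACHE_READ_RATE = 0.275   # $ per 1M cached prompt tokens read

def estimate_cost(prompt_tokens, completion_tokens, cached_tokens=0):
    """Estimate one request's cost in dollars. Cached tokens are assumed
    to replace the full prompt rate with the cache-read rate."""
    uncached = prompt_tokens - cached_tokens
    return (
        uncached * PROMPT_RATE
        + cached_tokens * CACHE_READ_RATE
        + completion_tokens * COMPLETION_RATE
    ) / 1_000_000

# e.g. 100K prompt tokens (half of them cache hits) + 10K completion tokens
cost = estimate_cost(100_000, 10_000, cached_tokens=50_000)  # ≈ $0.113
```

At these rates, completion tokens dominate the bill (4x the prompt rate), so long outputs cost far more than long prompts.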

Available Endpoints
Provider | Endpoint Name                   | Context Length | Pricing (Input)   | Pricing (Output)
OpenAI   | openai/o4-mini-high-2025-04-16  | 200K           | $1.10 / 1M tokens | $4.40 / 1M tokens
Benchmark Results
Benchmark | Category | Reasoning | Free Executions | Accuracy | Cost | Duration