OpenAI: GPT-4o-mini

Input modalities: text, image, file. Output: text.
Author's Description

GPT-4o mini is OpenAI's newest model after [GPT-4 Omni](/models/openai/gpt-4o), supporting both text and image inputs with text outputs. As their most advanced small model, it is many multiples more affordable than other recent frontier models, and more than 60% cheaper than [GPT-3.5 Turbo](/models/openai/gpt-3.5-turbo). It maintains SOTA intelligence while being significantly more cost-effective. GPT-4o mini achieves an 82% score on MMLU and presently ranks higher than GPT-4 on chat preferences, per [common leaderboards](https://arena.lmsys.org/). Check out the [launch announcement](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/) to learn more. #multimodal

Key Specifications

| Spec | Value |
| --- | --- |
| Cost | $$ |
| Context | 128K |
| Parameters | 8B (Rumoured) |
| Released | Jul 17, 2024 |
Supported Parameters

This model supports the following parameters:

Temperature, Top P, Max Tokens, Stop, Seed, Presence Penalty, Frequency Penalty, Logit Bias, Logprobs, Top Logprobs, Response Format, Structured Outputs, Tools, Tool Choice
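As a sketch of how several of these parameters fit together, the snippet below builds a Chat Completions request body exercising the sampling and logprob options. Field names follow the OpenAI Chat Completions API; the message contents and parameter values are illustrative assumptions, not recommendations.

```python
import json

# Illustrative Chat Completions request body for gpt-4o-mini, combining
# several of the supported parameters listed above. Values are examples only.
request_body = {
    "model": "gpt-4o-mini",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Name three prime numbers."},
    ],
    "temperature": 0.7,         # sampling randomness
    "top_p": 0.9,               # nucleus sampling cutoff
    "max_tokens": 64,           # cap on completion length
    "seed": 42,                 # best-effort reproducible sampling
    "presence_penalty": 0.0,
    "frequency_penalty": 0.2,
    "logprobs": True,
    "top_logprobs": 2,          # per-token alternatives (requires logprobs)
    "stop": ["\n\n"],
}

print(json.dumps(request_body, indent=2))
```

The same dictionary can be passed as keyword arguments to the official SDK's `chat.completions.create` method or sent as the JSON body of a raw HTTP request.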
Features

This model supports the following features:

Response Format, Tools, Structured Outputs
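To illustrate the Structured Outputs feature, the sketch below builds a `response_format` payload that constrains the model's output to a strict JSON Schema. The layout follows OpenAI's Structured Outputs documentation; the schema name and fields are hypothetical examples.

```python
# Sketch of a Structured Outputs response_format payload: a strict JSON
# Schema the model's output must conform to. Schema contents are made up
# for illustration (a toy email-classification shape).
response_format = {
    "type": "json_schema",
    "json_schema": {
        "name": "email_classification",  # hypothetical schema name
        "strict": True,
        "schema": {
            "type": "object",
            "properties": {
                "label": {"type": "string", "enum": ["spam", "not_spam"]},
                "confidence": {"type": "number"},
            },
            "required": ["label", "confidence"],
            "additionalProperties": False,
        },
    },
}
```

With `strict` set to true, the API constrains decoding so the completion is guaranteed to parse against the supplied schema.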
Performance Summary

GPT-4o-mini, released on July 17, 2024, is OpenAI's advanced small model, supporting both text and image inputs with text outputs. It consistently ranks among the fastest models, performing in the 82nd percentile across benchmarks, and offers competitive pricing, typically in the 74th percentile for cost-effectiveness. The model demonstrates exceptional reliability, with a 100% success rate across all evaluated benchmarks and minimal technical failures. In terms of performance, GPT-4o-mini excels in Ethics and General Knowledge, achieving perfect accuracy in Ethics and 99.5% in General Knowledge, and is often the most accurate among models of comparable speed and price. It also shows strong results in Coding (87.0% accuracy) and Email Classification (98.0% accuracy). However, its Hallucinations score (76.0% accuracy) suggests room for improvement in acknowledging uncertainty, and its scores in Mathematics (71.0%) and Reasoning (56.0%) mark these areas as relative weaknesses. Despite these weaknesses, its overall intelligence and cost-effectiveness make it a compelling option.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
| --- | --- |
| Prompt | $0.15 |
| Completion | $0.60 |
| Input Cache Read | $0.075 |
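The per-token rates above make request cost easy to estimate. The helper below is a minimal sketch: it applies the listed prompt, completion, and cache-read rates, assuming cached tokens are a subset of the prompt tokens.

```python
# Cost estimate at the listed rates: $0.15/1M prompt tokens,
# $0.60/1M completion tokens, $0.075/1M cached prompt reads.
PROMPT_RATE = 0.15 / 1_000_000
COMPLETION_RATE = 0.60 / 1_000_000
CACHED_RATE = 0.075 / 1_000_000

def estimate_cost(prompt_tokens: int, completion_tokens: int,
                  cached_tokens: int = 0) -> float:
    """Return the USD cost of one request.

    cached_tokens is the portion of prompt_tokens served from the
    input cache, billed at the discounted cache-read rate.
    """
    uncached = prompt_tokens - cached_tokens
    return (uncached * PROMPT_RATE
            + cached_tokens * CACHED_RATE
            + completion_tokens * COMPLETION_RATE)

# e.g. 10,000 prompt tokens (4,000 of them cached) + 2,000 completion tokens
cost = estimate_cost(10_000, 2_000, cached_tokens=4_000)
print(f"${cost:.6f}")  # → $0.002400
```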


Available Endpoints

| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
| --- | --- | --- | --- | --- |
| OpenAI | openai/gpt-4o-mini | 128K | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Azure | openai/gpt-4o-mini | 128K | $0.15 / 1M tokens | $0.60 / 1M tokens |