OpenAI: o1-mini

Text input → Text output
Author's Description

The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1). Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.

Key Specifications
| Specification | Value |
| --- | --- |
| Cost | $$$$$ |
| Context | 128K |
| Parameters | 100B (Rumoured) |
| Released | Sep 11, 2024 |
Supported Parameters

This model supports the following parameters (see the usage sketch below):

- Max Tokens
- Seed
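
For illustration only, here is a minimal sketch of how these two parameters might be passed through an OpenAI-compatible chat completions client. The gateway base URL, the API-key environment variable, and the `openai/o1-mini` model slug are assumptions drawn from the endpoint listing further down this page, not something this listing specifies.

```python
# Minimal sketch: sending a request to o1-mini with the two supported
# parameters listed above (Max Tokens, Seed) via the openai Python SDK.
# The base_url, environment variable, and model slug are assumptions;
# adjust them for whichever OpenAI-compatible endpoint you actually use.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # assumed OpenAI-compatible gateway
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed environment variable name
)

response = client.chat.completions.create(
    model="openai/o1-mini",
    messages=[
        {"role": "user", "content": "How many primes are there below 100?"}
    ],
    max_tokens=2048,  # "Max Tokens": upper bound on generated tokens
    seed=42,          # "Seed": best-effort reproducible sampling
)

print(response.choices[0].message.content)
```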
Performance Summary

OpenAI's o1-mini, released on September 11, 2024, is an experimental model designed for STEM-related tasks, emphasizing extended "thinking time" before responding. In these evaluations it proved exceptionally reliable, achieving a 100% success rate across benchmarks: every request returned a usable response without technical failure.

Despite that reliability and its intended specialization, o1-mini currently shows significant weaknesses in benchmark accuracy. It scored 0.0% on both the "General Knowledge (Baseline)" and "Email Classification (Baseline)" tests, and its response durations were long (311,085 ms and 200,955 ms respectively, roughly 5.2 and 3.3 minutes). The model's strengths are technical reliability and consistent delivery of responses; its current inability to produce correct answers in these general categories, even with extended processing time, is the critical limitation. As an experimental model, these metrics are likely to evolve.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
| --- | --- |
| Prompt | $1.10 |
| Completion | $4.40 |
| Input Cache Read | $0.55 |
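
To make the rates concrete, the sketch below estimates the cost of a single request from its token counts. Only the per-million rates come from the table above; the token counts in the example are made-up values.

```python
# Rough cost estimate for one request at the listed rates
# ($1.10 / 1M prompt tokens, $4.40 / 1M completion tokens,
#  $0.55 / 1M cached prompt tokens). Token counts are example values.
PROMPT_RATE = 1.10 / 1_000_000       # USD per prompt token
COMPLETION_RATE = 4.40 / 1_000_000   # USD per completion token
CACHE_READ_RATE = 0.55 / 1_000_000   # USD per cached prompt token

def estimate_cost(prompt_tokens: int, completion_tokens: int, cached_tokens: int = 0) -> float:
    """Return the estimated USD cost of a single request."""
    billable_prompt = prompt_tokens - cached_tokens  # cached tokens billed at the lower rate
    return (
        billable_prompt * PROMPT_RATE
        + cached_tokens * CACHE_READ_RATE
        + completion_tokens * COMPLETION_RATE
    )

# Example: 2,000 prompt tokens (500 of them cached) and 1,500 completion tokens
print(f"${estimate_cost(2_000, 1_500, cached_tokens=500):.6f}")  # ≈ $0.008525
```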


Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
| --- | --- | --- | --- | --- |
| OpenAI | openai/o1-mini | 128K | $1.10 / 1M tokens | $4.40 / 1M tokens |
Benchmark Results
| Benchmark | Category | Reasoning Strategy | Free | Executions | Accuracy | Cost | Duration |
| --- | --- | --- | --- | --- | --- | --- | --- |
| General Knowledge (Baseline) | – | – | – | – | 0.0% | – | 311,085 ms |
| Email Classification (Baseline) | – | – | – | – | 0.0% | – | 200,955 ms |