OpenAI: o1-mini (2024-09-12)

Text input · Text output
Author's Description

o1, the latest and strongest model family from OpenAI, is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks, and consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1). Note: this model is currently experimental, not suitable for production use cases, and may be heavily rate-limited.

Key Specifications
| Specification | Value |
|---|---|
| Cost | $$$$$ |
| Context | 128K |
| Parameters | 100B (Rumoured) |
| Released | Sep 12, 2024 |
Supported Parameters

This model supports the following parameters:

- Max Tokens
- Seed
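
These parameters map onto request fields in an OpenAI-compatible client. Below is a minimal sketch, assuming the official `openai` Python SDK (>= 1.0) and an API key in the `OPENAI_API_KEY` environment variable; the exact field name for the token cap differs by provider (`max_completion_tokens` in the OpenAI API for o1-series models, `max_tokens` elsewhere):

```python
# Minimal sketch, not a confirmed integration: assumes the official `openai`
# Python SDK (>= 1.0) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1-mini-2024-09-12",
    messages=[{"role": "user", "content": "Factor x^2 - 5x + 6."}],
    # "Max Tokens": caps generated tokens; o1-series models use
    # `max_completion_tokens` in the OpenAI API, other providers may use `max_tokens`.
    max_completion_tokens=1024,
    # "Seed": best-effort reproducibility across repeated requests.
    seed=42,
)
print(response.choices[0].message.content)
```
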
Performance Summary

The OpenAI o1-mini (2024-09-12) model, an experimental offering from OpenAI, is consistently among the fastest models tested. It is also highly reliable, completing every benchmark run with a 100% success rate, which points to a robust and stable API. Price competitiveness could not be assessed because no cost data was recorded for the benchmark runs, which may indicate free-tier usage.

Accuracy, however, is currently a significant weakness: the model scored 0.0% on both the General Knowledge (Baseline) and Email Classification (Baseline) tests. Although the model is designed for STEM-related tasks and is described as achieving PhD-level accuracy in physics, chemistry, and biology, this iteration struggled with general-knowledge and classification tasks. Its strengths are rapid response times and consistent operational stability; its inability to produce correct answers across these benchmarks is a notable limitation. The model is explicitly flagged as experimental and not suitable for production use, which is consistent with its current accuracy results.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $1.10 |
| Completion | $4.40 |
| Input Cache Read | $0.55 |
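
As a rough illustration of how these rates translate into a per-request cost, the sketch below prices a hypothetical call with 10,000 prompt tokens and 2,000 completion tokens (cache-read discounts ignored; the token counts are assumptions, not benchmark data):

```python
# Hypothetical token counts; rates are the listed per-1M-token prices.
PROMPT_RATE_USD = 1.10       # per 1M prompt tokens
COMPLETION_RATE_USD = 4.40   # per 1M completion tokens

prompt_tokens = 10_000
completion_tokens = 2_000

cost = (prompt_tokens / 1_000_000) * PROMPT_RATE_USD + \
       (completion_tokens / 1_000_000) * COMPLETION_RATE_USD
print(f"Estimated cost: ${cost:.4f}")  # $0.0110 + $0.0088 = $0.0198
```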

Price History

Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| OpenAI | openai/o1-mini-2024-09-12 | 128K | $1.10 / 1M tokens | $4.40 / 1M tokens |
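
A minimal sketch of calling this endpoint by its listed name through an OpenAI-compatible aggregator client; the base URL and the API-key environment variable below are assumptions, not values confirmed by this listing:

```python
# Sketch only: base URL and key variable are assumed, not confirmed by this page.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",    # assumed aggregator base URL
    api_key=os.environ["OPENROUTER_API_KEY"],   # assumed environment variable
)

response = client.chat.completions.create(
    model="openai/o1-mini-2024-09-12",  # endpoint name from the table above
    messages=[{"role": "user", "content": "Summarize the Pythagorean theorem."}],
)
print(response.choices[0].message.content)
```
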
Benchmark Results
| Benchmark | Category | Reasoning Strategy | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|