OpenAI: o1-mini (2024-09-12)

Input: Text · Output: Text
Author's Description

o1 is OpenAI's latest and strongest model family, designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks, and consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the [launch announcement](https://openai.com/o1). Note: This model is currently experimental, not suitable for production use cases, and may be heavily rate-limited.

Key Specifications
Cost
$$$$$
Context
128K
Parameters
100B (Rumoured)
Released
Sep 11, 2024
Supported Parameters

This model supports the following parameters:

- Max Tokens
- Seed
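As a rough illustration of how these two knobs might be used, the sketch below builds a request payload for an OpenAI-compatible chat completions endpoint. This is a hypothetical example, not vendor code: the function name is invented, and the `max_completion_tokens` field reflects the convention o1-series endpoints commonly use in place of `max_tokens`.

```python
# Hypothetical sketch: a request payload for o1-mini using only the two
# parameters this listing says are supported (max tokens and seed).
# Assumes an OpenAI-compatible /v1/chat/completions JSON body.

def build_o1_mini_request(prompt: str, max_tokens: int = 1024, seed: int = 42) -> dict:
    """Assemble the JSON body for a chat completion call to o1-mini."""
    return {
        "model": "openai/o1-mini-2024-09-12",
        "messages": [{"role": "user", "content": prompt}],
        # o1-series endpoints commonly expose the output cap as
        # max_completion_tokens rather than max_tokens.
        "max_completion_tokens": max_tokens,
        # A fixed seed requests best-effort deterministic sampling.
        "seed": seed,
    }

payload = build_o1_mini_request("Prove that sqrt(2) is irrational.")
print(payload["model"])
```

Sending the payload (e.g. via an HTTP POST with an API key) is left out, since the listing documents only the supported parameters.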
Performance Summary

OpenAI's o1-mini (2024-09-12), part of the new o1 family, demonstrates exceptional speed and reliability for an experimental offering. It consistently ranks among the fastest models across three benchmarks, and its reliability is outstanding: it scored in the 100th percentile on every benchmark, meaning it consistently returned usable responses without technical failures, a critical factor for development and testing. No cost data is available, which suggests potential free-tier usage.

Despite this strong foundation in speed and reliability, the model currently exhibits significant accuracy limitations across all tested categories. It achieved 0.0% accuracy in the Email Classification, Reasoning, and General Knowledge benchmarks, with considerable durations for each test (roughly 197 s, 72 s, and 315 s respectively). In other words, while the model is fast and stable, this iteration does not yet deliver the functional accuracy required for practical application, consistent with its experimental status and the provider's note that it is "not suitable for production use-cases." Its stated optimization for STEM tasks and PhD-level accuracy on benchmarks in physics, chemistry, and biology are not reflected in these general baseline tests.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
| --- | --- |
| Prompt | $1.10 |
| Completion | $4.40 |
| Input Cache Read | $0.55 |
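To make the per-token rates concrete, the blended cost of a single call can be estimated as below. The rates come from the pricing above; the token counts in the example are hypothetical, and the function is an illustrative sketch rather than an official billing formula.

```python
# Estimate the dollar cost of one o1-mini call from the listed per-1M-token rates.
PROMPT_RATE = 1.10       # $ per 1M uncached prompt tokens
COMPLETION_RATE = 4.40   # $ per 1M completion tokens
CACHE_READ_RATE = 0.55   # $ per 1M cached prompt tokens read

def call_cost(prompt_tokens: int, completion_tokens: int, cached_tokens: int = 0) -> float:
    """Cost of a single call; cached_tokens are billed at the cache-read rate."""
    uncached = prompt_tokens - cached_tokens
    cost = (uncached * PROMPT_RATE
            + cached_tokens * CACHE_READ_RATE
            + completion_tokens * COMPLETION_RATE) / 1_000_000
    return round(cost, 6)

# Hypothetical example: 10k prompt tokens (half served from cache), 2k completion tokens.
print(call_cost(10_000, 2_000, cached_tokens=5_000))  # → 0.01705
```

Note how the cache-read discount halves the cost of the cached portion of the prompt, which matters for workloads that resend a long shared prefix.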

Price History

Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
| --- | --- | --- | --- | --- |
| OpenAI | openai/o1-mini-2024-09-12 | 128K | $1.10 / 1M tokens | $4.40 / 1M tokens |
Benchmark Results
| Benchmark | Category | Reasoning | Free | Executions | Accuracy | Cost | Duration |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Email Classification | | | | | 0.0% | | 196963ms |
| Reasoning | | | | | 0.0% | | 72425ms |
| General Knowledge | | | | | 0.0% | | 314610ms |