Author's Description
Mercury Coder is the first diffusion large language model (dLLM). Applying a breakthrough discrete diffusion approach, the model runs 5-10x faster than even speed-optimized models like Claude 3.5 Haiku and GPT-4o Mini while matching their performance. Mercury Coder's speed means that developers can stay in the flow while coding, enjoying rapid chat-based iteration and responsive code completion suggestions. On Copilot Arena, Mercury Coder ranks 1st in speed and ties for 2nd in quality. Read more in the [blog post here](https://www.inceptionlabs.ai/introducing-mercury).
Key Specifications
Supported Parameters
This model supports the following parameters:
Features
This model supports the following features:
Performance Summary
Inception's Mercury Coder, the first diffusion large language model (dLLM), pairs exceptional speed with highly competitive pricing. Created on April 30, 2025, it consistently ranks among the fastest and cheapest models across benchmarks, and an 84% success rate indicates consistent, usable responses. It delivers top-tier speed in the Hallucinations, General Knowledge, Ethics, Email Classification, and Coding benchmarks, and is the #1 speed champion in Ethics, where it combines the highest speed with near-perfect accuracy. Accuracy in Hallucinations (96.0%) and General Knowledge (93.5%) is solid though not top-ranked, and 80.0% accuracy on the Coding benchmark is a clear strength for a "Coder" model. Its most notable weakness is Mathematics, where it scores 0.0% accuracy, and Instruction Following is a moderate challenge at 54.0%. Even so, Mercury Coder's discrete diffusion approach lets it run 5-10x faster than speed-optimized competitors while matching their performance, making it well suited to rapid chat-based iteration and responsive code completion.
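Inception has not published Mercury Coder's exact decoding procedure here, but "discrete diffusion" generally means generating text by iteratively denoising a masked sequence in parallel rather than emitting tokens strictly left to right. The toy sketch below illustrates only that general idea: the random scorer is a stand-in for a real denoising network, and none of this reflects Mercury Coder's actual implementation.

```python
import random

MASK = "[MASK]"
VOCAB = ["def", "add", "(", "a", ",", "b", ")", ":", "return", "+"]

def toy_denoiser(tokens):
    """Stand-in for a trained denoising network: guess every masked position
    at once with a (token, confidence) pair. A real dLLM predicts all
    positions in one parallel forward pass, which is where the speedup
    over token-by-token decoding comes from."""
    return {
        i: (random.choice(VOCAB), random.random())
        for i, tok in enumerate(tokens)
        if tok == MASK
    }

def diffusion_decode(length=10, steps=4):
    """Start from an all-masked sequence and unmask it over a few
    coarse-to-fine steps, keeping only the most confident guesses at
    each step and re-masking the rest."""
    tokens = [MASK] * length
    for step in range(steps):
        guesses = toy_denoiser(tokens)
        if not guesses:
            break
        # Commit roughly an equal share of the remaining masks per step.
        budget = max(1, len(guesses) // (steps - step))
        best = sorted(guesses.items(), key=lambda kv: kv[1][1], reverse=True)[:budget]
        for i, (tok, _conf) in best:
            tokens[i] = tok
    return tokens

print(" ".join(diffusion_decode()))
```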
Model Pricing
Current Pricing
Feature | Price (per 1M tokens) |
---|---|
Prompt | $0.25 |
Completion | $1.00 |
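For budgeting, the per-token rates above translate directly into a request-cost estimate. A minimal sketch, assuming billing is simply token counts times the listed rates; the `estimate_cost` helper is illustrative, not part of any official SDK.

```python
PROMPT_PRICE_PER_M = 0.25      # USD per 1M prompt tokens (from the table above)
COMPLETION_PRICE_PER_M = 1.00  # USD per 1M completion tokens (from the table above)

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed per-token rates."""
    return (prompt_tokens * PROMPT_PRICE_PER_M
            + completion_tokens * COMPLETION_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with an 800-token completion
print(f"${estimate_cost(2_000, 800):.6f}")  # ≈ $0.001300
```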
Price History
Available Endpoints
Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
---|---|---|---|---|
Inception | inception/mercury-coder-small-beta | 128K | $0.25 / 1M tokens | $1.00 / 1M tokens |
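The endpoint name suggests the model is served behind an OpenAI-compatible chat completions API. Below is a minimal sketch under that assumption; the base URL and the `PROVIDER_API_KEY` environment variable are placeholders, and only the model slug comes from the table above.

```python
import os
from openai import OpenAI

# Placeholder base URL and key variable: substitute your provider's real values.
client = OpenAI(
    base_url="https://example-provider.invalid/api/v1",
    api_key=os.environ["PROVIDER_API_KEY"],
)

response = client.chat.completions.create(
    model="inception/mercury-coder-small-beta",  # endpoint name from the table above
    messages=[
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    max_tokens=256,
)
print(response.choices[0].message.content)
```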
Benchmark Results
Benchmark | Category | Reasoning | Strategy | Free | Executions | Accuracy | Cost | Duration |
---|---|---|---|---|---|---|---|---|
Other Models by inception
Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
---|---|---|---|---|---|---|---|
Inception: Mercury | Jun 26, 2025 | — | 128K | Text input, text output | ★★★★★ | ★★ | $$$$$ |