Author's Description
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It accepts multilingual text and image input and produces multilingual text and code output across 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-like behavior, image reasoning, and general-purpose multimodal interaction. It features early fusion for native multimodality and a 1 million token context window. It was trained on a curated mixture of public, licensed, and Meta-platform data covering ~22 trillion tokens, with a knowledge cutoff of August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is suited for research and commercial applications that require advanced multimodal understanding and high throughput.
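Since Maverick accepts mixed text and image input, a request typically interleaves both in one message. The sketch below shows one plausible payload shape, assuming an OpenAI-compatible chat completions endpoint; the model ID matches the endpoints listed later, but the image URL and field layout are illustrative and may differ by provider.

```python
import json

# Sketch of a multimodal (text + image) chat request for Llama 4 Maverick,
# assuming an OpenAI-compatible /chat/completions endpoint. The image URL
# is a placeholder; exact field names can vary between providers.
payload = {
    "model": "meta-llama/llama-4-maverick-17b-128e-instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
    "max_tokens": 256,
}

# Serialize the request body as it would be sent over HTTP.
body = json.dumps(payload)
print(len(body) > 0)
```

The `content` field is a list rather than a plain string, which is how OpenAI-style APIs typically represent mixed text/image turns.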
Key Specifications
Supported Parameters
This model supports the following parameters:
Features
This model supports the following features:
Performance Summary
Meta's Llama 4 Maverick 17B Instruct (128E) demonstrates strong overall performance, particularly in reliability: a 100% success rate across all benchmarks indicates consistently usable responses. It ranks among the faster models (76th percentile for speed), though individual benchmark durations vary, and its pricing is competitive (68th percentile for cost-effectiveness). Among specific capabilities, Maverick excels at Ethics and General Knowledge, with perfect accuracy in Ethics and near-perfect accuracy in General Knowledge (99.3%), often making it the most accurate model at its price point or speed. Reasoning is also strong at 82.0% accuracy, and Email Classification reaches 98.0%. Its notable weakness is Instruction Following, where a low 30.3% accuracy suggests difficulty with complex, multi-step directives. Coding performance is moderate at 79.5% accuracy (47th percentile). Overall, Maverick is a highly reliable, cost-effective multimodal model with clear strengths in knowledge-based and ethical reasoning tasks, though its instruction following could be improved.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.15 |
| Completion | $0.60 |
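At these rates, the cost of a request is a simple linear function of token counts. The helper below is a minimal sketch using the listed prices; actual billing may round differently or include provider-specific fees.

```python
# Rough per-request cost at the listed rates:
# $0.15 per 1M prompt tokens, $0.60 per 1M completion tokens.
PROMPT_PRICE_PER_M = 0.15
COMPLETION_PRICE_PER_M = 0.60

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of a single request at the listed rates."""
    return (prompt_tokens * PROMPT_PRICE_PER_M
            + completion_tokens * COMPLETION_PRICE_PER_M) / 1_000_000

# e.g. a 10k-token prompt with a 1k-token completion:
print(f"${request_cost(10_000, 1_000):.4f}")  # → $0.0021
```

For long-context workloads the prompt side dominates: even at 1M prompt tokens (the full context window), input costs only $0.15 at these rates.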
Price History
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| DeepInfra | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Parasail | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.85 / 1M tokens |
| Kluster | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Novita | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.17 / 1M tokens | $0.85 / 1M tokens |
| Lambda | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.18 / 1M tokens | $0.60 / 1M tokens |
| BaseTen | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Cent-ML | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Groq | meta-llama/llama-4-maverick-17b-128e-instruct | 131K | $0.20 / 1M tokens | $0.60 / 1M tokens |
| NCompass | meta-llama/llama-4-maverick-17b-128e-instruct | 400K | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Fireworks | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.22 / 1M tokens | $0.88 / 1M tokens |
| GMICloud | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.25 / 1M tokens | $0.80 / 1M tokens |
| Together | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.27 / 1M tokens | $0.85 / 1M tokens |
| Google | meta-llama/llama-4-maverick-17b-128e-instruct | 524K | $0.35 / 1M tokens | $1.15 / 1M tokens |
| DeepInfra | meta-llama/llama-4-maverick-17b-128e-instruct | 8K | $0.50 / 1M tokens | $0.50 / 1M tokens |
| SambaNova | meta-llama/llama-4-maverick-17b-128e-instruct | 131K | $0.63 / 1M tokens | $1.80 / 1M tokens |
| BaseTen | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.19 / 1M tokens | $0.72 / 1M tokens |
| Friendli | meta-llama/llama-4-maverick-17b-128e-instruct | 131K | $0.20 / 1M tokens | $0.60 / 1M tokens |
| Cerebras | meta-llama/llama-4-maverick-17b-128e-instruct | 32K | $0.20 / 1M tokens | $0.60 / 1M tokens |
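Because input and output rates differ per endpoint, the cheapest provider depends on your prompt-to-completion ratio. The sketch below compares a hypothetical monthly workload across a few of the listed endpoints; the rates are taken from the table above, and the workload numbers are illustrative.

```python
# Compare the same workload across a subset of the listed endpoints.
# Rates are (input, output) in USD per 1M tokens, from the table above.
providers = {
    "DeepInfra": (0.15, 0.60),
    "Groq":      (0.20, 0.60),
    "Together":  (0.27, 0.85),
    "SambaNova": (0.63, 1.80),
}

def workload_cost(rates, prompt_tokens, completion_tokens):
    """Total USD cost of a workload at the given per-1M-token rates."""
    in_rate, out_rate = rates
    return (prompt_tokens * in_rate + completion_tokens * out_rate) / 1_000_000

# Hypothetical month: 50M prompt tokens, 5M completion tokens.
costs = {name: workload_cost(r, 50_000_000, 5_000_000)
         for name, r in providers.items()}
for name, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{name:10s} ${cost:8.2f}")
```

Note that context length also matters: the cheapest endpoint for short prompts (e.g. the 8K DeepInfra variant) cannot serve long-context requests at all.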
Benchmark Results
| Benchmark | Category | Reasoning | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|
Other Models by meta-llama
| Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| Meta: Llama Guard 4 12B | Apr 29, 2025 | 12B | 163K | Text input, Image input, Text output | ★★★★ | ★ | $ |
| Meta: Llama 4 Scout | Apr 05, 2025 | 17B | 1M | Text input, Image input, Text output | ★★★★ | ★★★ | $$ |
| Llama Guard 3 8B | Feb 12, 2025 | 8B | 131K | Text input, Text output | ★ | ★ | $ |
| Meta: Llama 3.3 70B Instruct | Dec 06, 2024 | 70B | 131K | Text input, Text output | ★★★★ | ★★★★★ | $ |
| Meta: Llama 3.2 1B Instruct | Sep 24, 2024 | 1B | 131K | Text input, Text output | ★★ | ★ | $ |
| Meta: Llama 3.2 3B Instruct | Sep 24, 2024 | 3B | 131K | Text input, Text output | ★★★ | ★ | $ |
| Meta: Llama 3.2 11B Vision Instruct | Sep 24, 2024 | 11B | 128K | Text input, Image input, Text output | ★★★ | ★★ | $$ |
| Meta: Llama 3.2 90B Vision Instruct | Sep 24, 2024 | 90B | 131K | Text input, Image input, Text output | ★★★ | ★★ | $$$$ |
| Meta: Llama 3.1 405B (base) | Aug 01, 2024 | 405B | 32K | Text input, Text output | ★ | ★ | $$$$ |
| Meta: Llama 3.1 70B Instruct | Jul 22, 2024 | 70B | 131K | Text input, Text output | ★★★★ | ★★ | $$ |
| Meta: Llama 3.1 405B Instruct | Jul 22, 2024 | 405B | 32K | Text input, Text output | ★★★★ | ★★ | $$$$ |
| Meta: Llama 3.1 8B Instruct | Jul 22, 2024 | 8B | 131K | Text input, Text output | ★★★ | ★★★ | $ |
| Meta: LlamaGuard 2 8B | May 12, 2024 | 8B | 8K | Text input, Text output | ★★★★ | ★ | $$ |
| Meta: Llama 3 8B Instruct | Apr 17, 2024 | 8B | 8K | Text input, Text output | ★★★ | ★★ | $ |
| Meta: Llama 3 70B Instruct | Apr 17, 2024 | 70B | 8K | Text input, Text output | ★★★★ | ★★ | $$$ |
| Meta: Llama 2 70B Chat (unavailable) | Jun 19, 2023 | 70B | 4K | Text input, Text output | — | — | $$$$ |