Author's Description
Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It supports multilingual text and image input, and produces multilingual text and code output across 12 supported languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-like behavior, image reasoning, and general-purpose multimodal interaction. Maverick features early fusion for native multimodality and a 1 million token context window. It was trained on a curated mixture of public, licensed, and Meta-platform data, covering ~22 trillion tokens, with a knowledge cutoff in August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick is suited for research and commercial applications requiring advanced multimodal understanding and high model throughput.
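Since the model accepts mixed text and image input through OpenAI-compatible chat endpoints, the snippet below is a minimal sketch of such a request. The base URL, API key, and image URL are placeholder assumptions; the model slug matches the endpoint table further down. This is illustrative, not a specific provider's documented API.

```python
# Minimal sketch: multimodal (text + image) request to Maverick via an
# OpenAI-compatible endpoint. Base URL, key, and image URL are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # hypothetical provider URL
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="meta-llama/llama-4-maverick-17b-128e-instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart in two sentences."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},  # example image
            ],
        }
    ],
)
print(response.choices[0].message.content)
```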
Performance Summary
Meta's Llama 4 Maverick 17B Instruct (128E) demonstrates strong overall performance, particularly in reliability and several specialized tasks. It ranks among the faster models, in the 76th percentile for speed across benchmarks, and offers competitive pricing in the 64th percentile. Notably, Maverick achieved a 100% success rate across all evaluated benchmarks, indicating consistent and stable operation.

Maverick excels in Ethics and Mathematics, scoring perfect accuracy in Ethics and 94.5% in Mathematics (94th and 92nd percentiles respectively), and is often the most accurate model at comparable speed or price. It also performs strongly in General Knowledge (99.3% accuracy) and Reasoning (80.0%). Its notable weakness is Instruction Following, at just 30.3% accuracy (31st percentile). Its hallucination benchmark accuracy of 88.0% (a 12% hallucination rate) also leaves room for improvement, and Coding performance is moderate at 79.5%.

Overall, Maverick is a robust model for multimodal understanding and generation, well suited to applications that demand high reliability and strong ethical or mathematical reasoning, though its instruction following may require further refinement.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.15 |
| Completion | $0.60 |
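At these rates, per-request cost is simple arithmetic over token counts. The sketch below shows the calculation; the 50K-prompt / 2K-completion example is purely illustrative.

```python
# Back-of-the-envelope cost check at the listed rates
# ($0.15 prompt / $0.60 completion per 1M tokens).
PROMPT_RATE = 0.15 / 1_000_000      # USD per prompt token
COMPLETION_RATE = 0.60 / 1_000_000  # USD per completion token

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of a single request at the listed rates."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# e.g. a 50K-token prompt with a 2K-token completion:
print(f"${request_cost(50_000, 2_000):.4f}")  # prints $0.0087
```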
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| DeepInfra | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Parasail | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.85 / 1M tokens |
| Kluster | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Novita | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.17 / 1M tokens | $0.85 / 1M tokens |
| Lambda | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens |
| BaseTen | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Cent-ML | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Groq | meta-llama/llama-4-maverick-17b-128e-instruct | 131K | $0.20 / 1M tokens | $0.60 / 1M tokens |
| NCompass | meta-llama/llama-4-maverick-17b-128e-instruct | 400K | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Fireworks | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.22 / 1M tokens | $0.88 / 1M tokens |
| GMICloud | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens |
| Together | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.27 / 1M tokens | $0.85 / 1M tokens |
| Google | meta-llama/llama-4-maverick-17b-128e-instruct | 524K | $0.35 / 1M tokens | $1.15 / 1M tokens |
| DeepInfra | meta-llama/llama-4-maverick-17b-128e-instruct | 8K | $0.50 / 1M tokens | $0.50 / 1M tokens |
| SambaNova | meta-llama/llama-4-maverick-17b-128e-instruct | 131K | $0.63 / 1M tokens | $1.80 / 1M tokens |
| BaseTen | meta-llama/llama-4-maverick-17b-128e-instruct | 1M | $0.19 / 1M tokens | $0.72 / 1M tokens |
| Friendli | meta-llama/llama-4-maverick-17b-128e-instruct | 131K | $0.20 / 1M tokens | $0.60 / 1M tokens |
| Cerebras | meta-llama/llama-4-maverick-17b-128e-instruct | 32K | $0.20 / 1M tokens | $0.60 / 1M tokens |
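Because input and output tokens are priced differently, the cheapest endpoint depends on a workload's input-to-output token mix. The sketch below ranks a subset of the providers above by blended cost per million tokens; the 80/20 input/output split is an assumption, and the rates are copied from the table.

```python
# Rank endpoints by blended cost for an assumed input/output token mix.
# Rates ($ per 1M tokens) copied from the table; subset kept for brevity.
ENDPOINTS = {  # provider: (input rate, output rate, context length)
    "DeepInfra": (0.15, 0.60, "1M"),
    "Groq":      (0.20, 0.60, "131K"),
    "Together":  (0.27, 0.85, "1M"),
    "Google":    (0.35, 1.15, "524K"),
    "SambaNova": (0.63, 1.80, "131K"),
}

def blended_cost(input_rate: float, output_rate: float,
                 output_share: float = 0.2) -> float:
    """Cost per 1M total tokens if `output_share` of tokens are output."""
    return (1 - output_share) * input_rate + output_share * output_rate

for name, (inp, out, ctx) in sorted(
    ENDPOINTS.items(), key=lambda kv: blended_cost(kv[1][0], kv[1][1])
):
    print(f"{name:10s} {ctx:>5s} ctx  ${blended_cost(inp, out):.3f} / 1M tokens")
```

Note that context length matters as much as price: the $0.50/$0.50 DeepInfra endpoint is capped at 8K tokens, so it only suits short-context workloads despite its flat rate.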
Other Models by meta-llama
| Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| Meta: Llama Guard 4 12B | Apr 29, 2025 | 12B | 163K | Text + Image → Text | — | ★ | $$ |
| Meta: Llama 4 Scout | Apr 05, 2025 | 17B | 327K | Text + Image → Text | ★★★★ | ★★ | $$ |
| Llama Guard 3 8B | Feb 12, 2025 | 8B | 131K | Text → Text | ★★ | ★ | $$ |
| Meta: Llama 3.3 70B Instruct | Dec 06, 2024 | 70B | 131K | Text → Text | ★★★★ | ★★★★ | $ |
| Meta: Llama 3.2 1B Instruct | Sep 24, 2024 | 1B | 131K | Text → Text | ★★ | ★ | $ |
| Meta: Llama 3.2 3B Instruct | Sep 24, 2024 | 3B | 131K | Text → Text | ★★★ | ★ | $ |
| Meta: Llama 3.2 11B Vision Instruct | Sep 24, 2024 | 11B | 128K | Text + Image → Text | ★★ | ★★ | $$ |
| Meta: Llama 3.2 90B Vision Instruct | Sep 24, 2024 | 90B | 131K | Text + Image → Text | ★★★ | ★★ | $$$$ |
| Meta: Llama 3.1 405B (base) | Aug 01, 2024 | 405B | 32K | Text → Text | ★ | ★ | $$$ |
| Meta: Llama 3.1 70B Instruct | Jul 22, 2024 | 70B | 131K | Text → Text | ★★★★ | ★★ | $$ |
| Meta: Llama 3.1 405B Instruct | Jul 22, 2024 | 405B | 32K | Text → Text | ★★★★ | ★★ | $$$ |
| Meta: Llama 3.1 8B Instruct | Jul 22, 2024 | 8B | 131K | Text → Text | ★★★ | ★★ | $ |
| Meta: LlamaGuard 2 8B | May 12, 2024 | 8B | 8K | Text → Text | ★★★★ | ★ | $$ |
| Meta: Llama 3 8B Instruct | Apr 17, 2024 | 8B | 8K | Text → Text | ★★★ | ★★ | $ |
| Meta: Llama 3 70B Instruct | Apr 17, 2024 | 70B | 8K | Text → Text | ★★★★ | ★★ | $$$ |
| Meta: Llama 2 70B Chat (unavailable) | Jun 19, 2023 | 70B | 4K | Text → Text | — | — | $$$$ |