Meta: Llama 4 Maverick

Input: text, images · Output: text
Author's Description

Llama 4 Maverick 17B Instruct (128E) is a high-capacity multimodal language model from Meta, built on a mixture-of-experts (MoE) architecture with 128 experts and 17 billion active parameters per forward pass (400B total). It accepts multilingual text and image input and produces text and code output in 12 languages. Optimized for vision-language tasks, Maverick is instruction-tuned for assistant-style behavior, image reasoning, and general-purpose multimodal interaction. It features early fusion for native multimodality and a 1-million-token context window. The model was trained on a curated mixture of public, licensed, and Meta-platform data totaling roughly 22 trillion tokens, with a knowledge cutoff of August 2024. Released on April 5, 2025 under the Llama 4 Community License, Maverick suits research and commercial applications that require advanced multimodal understanding and high throughput.
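Because Maverick takes both text and image input, a request pairs the two in a single message. The sketch below builds such a payload, assuming an OpenAI-compatible chat completions API; the exact endpoint, client library, and image-content schema vary by provider.

```python
# Sketch of a multimodal chat request for Llama 4 Maverick.
# Assumption: the provider accepts OpenAI-style message content parts
# ("text" and "image_url"); check your provider's docs for the exact schema.
def build_multimodal_request(prompt: str, image_url: str) -> dict:
    """Build a chat payload that pairs a text prompt with an image input."""
    return {
        "model": "meta-llama/llama-4-maverick-17b-128e-instruct",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_multimodal_request(
    "Describe this chart.", "https://example.com/chart.png"
)
```

The payload would then be POSTed to the provider's chat completions endpoint with your API key.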

Key Specifications
Cost
$$$
Context
1M
Parameters
17B
Released
Apr 05, 2025
Supported Parameters

This model supports the following parameters:

Stop, Presence Penalty, Top P, Temperature, Seed, Min P, Response Format, Frequency Penalty, Max Tokens
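The supported parameters above map onto the usual sampling fields of an OpenAI-style request. A minimal sketch, assuming that convention (field names may differ slightly per provider):

```python
# Sketch of a request exercising the supported sampling parameters.
# Assumption: OpenAI-style field names; values here are illustrative only.
def build_sampling_request(prompt: str) -> dict:
    return {
        "model": "meta-llama/llama-4-maverick-17b-128e-instruct",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,        # sampling randomness
        "top_p": 0.9,              # nucleus sampling cutoff
        "min_p": 0.05,             # min probability relative to the top token
        "frequency_penalty": 0.2,  # discourage verbatim repetition
        "presence_penalty": 0.1,   # encourage new topics
        "max_tokens": 512,         # cap on completion length
        "seed": 42,                # best-effort reproducibility
        "stop": ["\n\n"],          # stop sequences
    }

request = build_sampling_request("Summarize MoE routing in two sentences.")
```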
Features

This model supports the following features:

Response Format
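The Response Format feature constrains the model to emit structured output. A minimal sketch, assuming the OpenAI-style `response_format` field (provider support and exact schema options vary):

```python
# Sketch of enabling JSON-mode output via the Response Format feature.
# Assumption: the provider honors OpenAI-style {"type": "json_object"}.
def build_json_request(prompt: str) -> dict:
    return {
        "model": "meta-llama/llama-4-maverick-17b-128e-instruct",
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {"type": "json_object"},
    }

req = build_json_request("List three advantages of MoE models as a JSON object.")
```

A compliant completion should then parse cleanly with `json.loads(...)`.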
Performance Summary

Meta's Llama 4 Maverick 17B Instruct (128E) demonstrates strong overall performance, particularly excelling in reliability with a 100% success rate across all benchmarks, indicating consistent and usable responses. The model ranks among the faster models, in the 76th percentile for speed, though individual benchmark durations vary, and it is competitively priced, in the 68th percentile for cost-effectiveness.

Among specific capabilities, Maverick is exceptional in Ethics and General Knowledge, achieving perfect accuracy in Ethics and near-perfect accuracy in General Knowledge (99.3%), and is often the most accurate model at its price point or speed. Its Reasoning is also strong at 82.0% accuracy, and Email Classification is another strength at 98.0%.

The model's notable weakness is Instruction Following, with a relatively low accuracy of 30.3%, suggesting difficulty with complex, multi-step directives. Coding performance is moderate at 79.5% accuracy, placing it in the 47th percentile. Overall, Maverick is a highly reliable and cost-effective multimodal model with significant strengths in knowledge-based and ethical reasoning tasks, though its instruction following could be improved.

Model Pricing

Current Pricing

Feature | Price (per 1M tokens)
Prompt | $0.15
Completion | $0.60
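At the listed rates, the cost of a request is a simple per-token calculation. A worked example (token counts are illustrative):

```python
# Worked cost example at the listed rates:
# $0.15 per 1M prompt tokens, $0.60 per 1M completion tokens.
def request_cost(prompt_tokens: int, completion_tokens: int,
                 prompt_rate: float = 0.15,
                 completion_rate: float = 0.60) -> float:
    """Return the USD cost of one request, given per-1M-token rates."""
    return (prompt_tokens * prompt_rate
            + completion_tokens * completion_rate) / 1_000_000

# e.g. a long-context call: 200K prompt tokens + 4K completion tokens
cost = request_cost(200_000, 4_000)
print(f"${cost:.4f}")  # $0.0324
```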


Available Endpoints
Endpoint name for all providers: meta-llama/llama-4-maverick-17b-128e-instruct

Provider | Context Length | Pricing (Input) | Pricing (Output)
DeepInfra | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens
Parasail | 1M | $0.15 / 1M tokens | $0.85 / 1M tokens
Kluster | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens
Novita | 1M | $0.17 / 1M tokens | $0.85 / 1M tokens
Lambda | 1M | $0.18 / 1M tokens | $0.60 / 1M tokens
BaseTen | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens
Cent-ML | 1M | $0.15 / 1M tokens | $0.60 / 1M tokens
Groq | 131K | $0.20 / 1M tokens | $0.60 / 1M tokens
NCompass | 400K | $0.15 / 1M tokens | $0.60 / 1M tokens
Fireworks | 1M | $0.22 / 1M tokens | $0.88 / 1M tokens
GMICloud | 1M | $0.25 / 1M tokens | $0.80 / 1M tokens
Together | 1M | $0.27 / 1M tokens | $0.85 / 1M tokens
Google | 524K | $0.35 / 1M tokens | $1.15 / 1M tokens
DeepInfra | 8K | $0.50 / 1M tokens | $0.50 / 1M tokens
SambaNova | 131K | $0.63 / 1M tokens | $1.80 / 1M tokens
BaseTen | 1M | $0.19 / 1M tokens | $0.72 / 1M tokens
Friendli | 131K | $0.20 / 1M tokens | $0.60 / 1M tokens
Cerebras | 32K | $0.20 / 1M tokens | $0.60 / 1M tokens