## Author's Description
Llama 4 Scout 17B Instruct (16E) is a mixture-of-experts (MoE) language model developed by Meta, activating 17B of its 109B total parameters (16 experts) per token. It supports native multimodal input...
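Since the model accepts multimodal input and is served under one slug across the providers listed below, a request body can be sketched in the common OpenAI-compatible content-parts style. This is a minimal illustration, not any specific provider's SDK: the base URL, API key, and example image URL would be placeholders you supply; only the model slug comes from this page.

```python
import json

# Model slug as listed in the endpoint table below. Each provider hosts its
# own OpenAI-compatible endpoint; base URL and API key are up to you.
MODEL = "meta-llama/llama-4-scout-17b-16e-instruct"

def build_chat_request(text, image_url=None):
    """Build an OpenAI-compatible chat-completions payload.

    Multimodal input uses the usual content-parts convention: a list of
    {"type": "text"} / {"type": "image_url"} parts in one user message.
    """
    parts = [{"type": "text", "text": text}]
    if image_url is not None:
        parts.append({"type": "image_url", "image_url": {"url": image_url}})
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": parts}],
    }

# Hypothetical example; the URL is a placeholder, not a real asset.
payload = build_chat_request("Describe this image.", "https://example.com/cat.jpg")
print(json.dumps(payload, indent=2))
```

POSTing this payload to a provider's `/chat/completions` route (with your key in the `Authorization` header) is the usual pattern, but check each provider's docs for the exact path.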
## Key Specifications

### Supported Parameters

This model supports the following parameters:

### Features

This model supports the following features:
## Performance Summary

Meta's Llama 4 Scout 17B Instruct (16E) shows a balanced profile: strong cost-effectiveness and reliability, with mixed accuracy across task types. It ranks in the 63rd percentile for speed and the 77th percentile for pricing, and its 93% success rate indicates consistently usable responses.

The model excels at Email Classification (99% accuracy, 91st percentile), highlighting its strength in categorization tasks. It also performs well in General Knowledge (97% accuracy) and Ethics (98% accuracy), suggesting a solid grasp of factual information and ethical principles.

Its clearest weaknesses are Mathematics (39% accuracy, 15th percentile) and Instruction Following (38.7% accuracy, 31st percentile), indicating difficulty with complex logical operations and multi-step directives. A 68% score on the hallucination benchmark is also a concern, suggesting a tendency to generate answers rather than acknowledge uncertainty. Coding performance is moderate (79.5% accuracy), and Reasoning is average (58% accuracy).
## Model Pricing

### Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.08 |
| Completion | $0.30 |
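At the base rates above, per-request cost is simple arithmetic: tokens times the per-token rate. A small sketch (the token counts are made-up examples):

```python
# Base rates from the pricing table: $0.08 / 1M prompt tokens,
# $0.30 / 1M completion tokens.
PROMPT_PRICE = 0.08 / 1_000_000      # dollars per prompt token
COMPLETION_PRICE = 0.30 / 1_000_000  # dollars per completion token

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request at the base rates."""
    return prompt_tokens * PROMPT_PRICE + completion_tokens * COMPLETION_PRICE

# e.g. a 10k-token prompt with a 1k-token answer:
print(f"${request_cost(10_000, 1_000):.6f}")  # → $0.001100
```

Note that some endpoints below (Groq, Google, one Novita listing) charge higher rates, so substitute those prices when routing there.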
### Price History
## Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Lambda | meta-llama/llama-4-scout-17b-16e-instruct | 1M | $0.08 / 1M tokens | $0.30 / 1M tokens |
| DeepInfra | meta-llama/llama-4-scout-17b-16e-instruct | 327K | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Kluster | meta-llama/llama-4-scout-17b-16e-instruct | 131K | $0.08 / 1M tokens | $0.30 / 1M tokens |
| GMICloud | meta-llama/llama-4-scout-17b-16e-instruct | 1M | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Parasail | meta-llama/llama-4-scout-17b-16e-instruct | 158K | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Cent-ML | meta-llama/llama-4-scout-17b-16e-instruct | 1M | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Novita | meta-llama/llama-4-scout-17b-16e-instruct | 131K | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Groq | meta-llama/llama-4-scout-17b-16e-instruct | 131K | $0.11 / 1M tokens | $0.34 / 1M tokens |
| BaseTen | meta-llama/llama-4-scout-17b-16e-instruct | 1M | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Fireworks | meta-llama/llama-4-scout-17b-16e-instruct | 1M | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Together | meta-llama/llama-4-scout-17b-16e-instruct | 1M | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Google | meta-llama/llama-4-scout-17b-16e-instruct | 1M | $0.25 / 1M tokens | $0.70 / 1M tokens |
| SambaNova | meta-llama/llama-4-scout-17b-16e-instruct | 8K | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Cerebras | meta-llama/llama-4-scout-17b-16e-instruct | 32K | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Friendli | meta-llama/llama-4-scout-17b-16e-instruct | 447K | $0.08 / 1M tokens | $0.30 / 1M tokens |
| Novita | meta-llama/llama-4-scout-17b-16e-instruct | 131K | $0.18 / 1M tokens | $0.59 / 1M tokens |
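Since the endpoints differ mainly in context length and price, picking one can be reduced to a filter-then-minimize over the table. A sketch with a few rows transcribed from above (the 3:1 input/output weighting is an arbitrary illustrative mix, not a standard metric):

```python
# (provider, context_tokens, input_$_per_1M, output_$_per_1M),
# transcribed from a subset of the endpoint table above.
ENDPOINTS = [
    ("Lambda",    1_000_000, 0.08, 0.30),
    ("DeepInfra",   327_000, 0.08, 0.30),
    ("Groq",        131_000, 0.11, 0.34),
    ("Google",    1_000_000, 0.25, 0.70),
    ("SambaNova",     8_000, 0.08, 0.30),
]

def cheapest(min_context, in_weight=3, out_weight=1):
    """Cheapest endpoint with at least `min_context` tokens of context,
    ranked by a blended price weighted in_weight:out_weight."""
    candidates = [e for e in ENDPOINTS if e[1] >= min_context]
    return min(candidates, key=lambda e: in_weight * e[2] + out_weight * e[3])

print(cheapest(500_000)[0])  # → Lambda
```

With ties at the base $0.08/$0.30 rate, `min` keeps the first-listed provider; a real router would break ties on latency or reliability instead.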
## Benchmark Results

| Benchmark | Category | Reasoning | Strategy | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|---|
## Other Models by meta-llama

| Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| Meta: Llama Guard 4 12B | Apr 29, 2025 | 12B | 163K | Text + image input, text output | — | ★ | $$ |
| Meta: Llama 4 Maverick | Apr 05, 2025 | 17B | 1M | Text + image input, text output | ★★★★★ | ★★★ | $$ |
| Llama Guard 3 8B | Feb 12, 2025 | 8B | 131K | Text input, text output | ★★ | ★ | $$ |
| Meta: Llama 3.3 70B Instruct | Dec 06, 2024 | 70B | 131K | Text input, text output | ★★★★ | ★★★★ | $ |
| Meta: Llama 3.2 1B Instruct | Sep 24, 2024 | 1B | 131K | Text input, text output | ★★ | ★ | $ |
| Meta: Llama 3.2 3B Instruct | Sep 24, 2024 | 3B | 131K | Text input, text output | ★★★ | ★ | $ |
| Meta: Llama 3.2 11B Vision Instruct | Sep 24, 2024 | 11B | 128K | Text + image input, text output | ★★★ | ★ | $$ |
| Meta: Llama 3.2 90B Vision Instruct (Unavailable) | Sep 24, 2024 | 90B | 131K | Text + image input, text output | ★★★ | ★★ | $$$$ |
| Meta: Llama 3.1 405B (base) (Unavailable) | Aug 01, 2024 | 405B | 32K | Text input, text output | ★ | ★ | $$$ |
| Meta: Llama 3.1 70B Instruct | Jul 22, 2024 | 70B | 131K | Text input, text output | ★★★★ | ★★ | $$ |
| Meta: Llama 3.1 405B Instruct (Unavailable) | Jul 22, 2024 | 405B | 32K | Text input, text output | ★★★★ | ★★ | $$$ |
| Meta: Llama 3.1 8B Instruct | Jul 22, 2024 | 8B | 131K | Text input, text output | ★★★ | ★★ | $ |
| Meta: LlamaGuard 2 8B (Unavailable) | May 12, 2024 | 8B | 8K | Text input, text output | ★★★★ | ★ | $$ |
| Meta: Llama 3 8B Instruct | Apr 17, 2024 | 8B | 8K | Text input, text output | ★★★★ | ★★ | $ |
| Meta: Llama 3 70B Instruct | Apr 17, 2024 | 70B | 8K | Text input, text output | ★★★★ | ★★ | $$ |
| Meta: Llama 2 70B Chat (Unavailable) | Jun 19, 2023 | 70B | 4K | Text input, text output | — | — | $$$$ |