Author's Description
Meta's latest class of models (Llama 3.1) launched with a variety of sizes and flavors. This 8B instruct-tuned version is fast and efficient. It has demonstrated strong performance compared to leading closed-source models in human evaluations. To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).
Key Specifications
Supported Parameters
This model supports the following parameters:
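As a rough sketch of how request parameters are typically passed to this model, the example below assumes an OpenAI-compatible chat completions endpoint; the base URL, API key variable, and the specific sampling parameters shown (temperature, top_p, max_tokens) are illustrative assumptions, not a confirmed list of what every provider supports.

```python
# Sketch only: assumes an OpenAI-compatible endpoint serving this model.
# The base_url and environment variable are hypothetical; the sampling
# parameters shown are common ones, not a definitive supported list.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.invalid/v1",  # hypothetical provider URL
    api_key=os.environ["PROVIDER_API_KEY"],          # hypothetical env var
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.1-8b-instruct",
    messages=[
        {"role": "user", "content": "Summarize the Llama 3.1 release in one sentence."}
    ],
    temperature=0.7,   # typical sampling controls; check your provider's docs
    top_p=0.9,
    max_tokens=256,
)
print(response.choices[0].message.content)
```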
Performance Summary
Meta's Llama 3.1 8B Instruct model, released on July 22, 2024, demonstrates a balanced performance profile, particularly excelling in cost-efficiency. It consistently offers among the most competitive pricing, ranking in the 98th percentile across six benchmarks. The model exhibits competitive response times, placing in the 56th percentile for speed. Reliability is strong, with an 89% success rate, indicating consistent provision of evaluable responses. In terms of specific benchmarks, Llama 3.1 8B Instruct shows notable strengths in Classification and Ethics, achieving 95.0% accuracy in Email Classification and 98.5% in Ethics, with both categories also being highly cost-effective. It is highlighted as the most accurate model at its price point for these tasks. However, the model shows weaknesses in more complex cognitive tasks such as Instruction Following (32.2% accuracy), Reasoning (46.0% accuracy), and General Knowledge (87.5% accuracy), where its accuracy percentiles are lower. While its Coding performance is moderate at 69.0% accuracy, its reasoning duration is notably slower than average. Overall, this model is a strong contender for cost-sensitive applications requiring high reliability and strong performance in classification and ethical reasoning.
Model Pricing
Current Pricing
Feature | Price (per 1M tokens) |
---|---|
Prompt | $0.015 |
Completion | $0.02 |
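Since the rates above are quoted per million tokens, the per-request cost is simply the token counts weighted by those rates. The sketch below works through the arithmetic; the token counts used are made-up inputs for illustration.

```python
# Worked example: per-request cost at the listed rates ($0.015 / 1M prompt
# tokens, $0.02 / 1M completion tokens). Token counts below are hypothetical.
PROMPT_PRICE_PER_M = 0.015
COMPLETION_PRICE_PER_M = 0.02

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-1M-token rates."""
    return (prompt_tokens * PROMPT_PRICE_PER_M +
            completion_tokens * COMPLETION_PRICE_PER_M) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
print(f"${request_cost(2_000, 500):.6f}")  # -> $0.000040
```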
Price History
Available Endpoints
Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
---|---|---|---|---|
Kluster | meta-llama/llama-3.1-8b-instruct | 131K | $0.015 / 1M tokens | $0.02 / 1M tokens |
DeepInfra | meta-llama/llama-3.1-8b-instruct | 131K | $0.015 / 1M tokens | $0.02 / 1M tokens |
InferenceNet | meta-llama/llama-3.1-8b-instruct | 16K | $0.02 / 1M tokens | $0.03 / 1M tokens |
Novita | meta-llama/llama-3.1-8b-instruct | 16K | $0.02 / 1M tokens | $0.05 / 1M tokens |
Nebius | meta-llama/llama-3.1-8b-instruct | 131K | $0.02 / 1M tokens | $0.06 / 1M tokens |
Lambda | meta-llama/llama-3.1-8b-instruct | 131K | $0.025 / 1M tokens | $0.04 / 1M tokens |
DeepInfra | meta-llama/llama-3.1-8b-instruct | 131K | $0.03 / 1M tokens | $0.05 / 1M tokens |
Cloudflare | meta-llama/llama-3.1-8b-instruct | 32K | $0.045 / 1M tokens | $0.384 / 1M tokens |
Groq | meta-llama/llama-3.1-8b-instruct | 131K | $0.05 / 1M tokens | $0.08 / 1M tokens |
Hyperbolic | meta-llama/llama-3.1-8b-instruct | 131K | $0.1 / 1M tokens | $0.1 / 1M tokens |
Cerebras | meta-llama/llama-3.1-8b-instruct | 32K | $0.1 / 1M tokens | $0.1 / 1M tokens |
Friendli | meta-llama/llama-3.1-8b-instruct | 131K | $0.1 / 1M tokens | $0.1 / 1M tokens |
SambaNova | meta-llama/llama-3.1-8b-instruct | 16K | $0.1 / 1M tokens | $0.2 / 1M tokens |
Together | meta-llama/llama-3.1-8b-instruct | 131K | $0.18 / 1M tokens | $0.18 / 1M tokens |
Fireworks | meta-llama/llama-3.1-8b-instruct | 131K | $0.2 / 1M tokens | $0.2 / 1M tokens |
Avian | meta-llama/llama-3.1-8b-instruct | 131K | $0.2 / 1M tokens | $0.2 / 1M tokens |
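Because input and output rates differ by provider, a blended per-1M-token figure can make the endpoints above easier to compare. The sketch below ranks a subset of the providers from the table under an assumed 3:1 input-to-output token mix; both the subset and the traffic ratio are illustrative assumptions, not a recommendation.

```python
# Sketch: rank a few endpoints from the table above by blended price,
# assuming 3 input tokens for every 1 output token. Prices are USD per
# 1M tokens as listed; the traffic ratio is an illustrative assumption.
endpoints = {
    # provider: (input price, output price)
    "Kluster":    (0.015, 0.02),
    "DeepInfra":  (0.015, 0.02),
    "Novita":     (0.02,  0.05),
    "Cloudflare": (0.045, 0.384),
    "Groq":       (0.05,  0.08),
    "Together":   (0.18,  0.18),
}

INPUT_SHARE = 0.75  # 3:1 input:output

def blended_price(input_price: float, output_price: float) -> float:
    """Weighted per-1M-token price under the assumed traffic mix."""
    return INPUT_SHARE * input_price + (1 - INPUT_SHARE) * output_price

for provider, (inp, out) in sorted(endpoints.items(), key=lambda kv: blended_price(*kv[1])):
    print(f"{provider:<11} ${blended_price(inp, out):.4f} per 1M tokens")
```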
Benchmark Results
Benchmark | Category | Reasoning | Free | Executions | Accuracy | Cost | Duration |
---|---|---|---|---|---|---|---|
Other Models by meta-llama
Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
---|---|---|---|---|---|---|---|
Meta: Llama Guard 4 12B | Apr 29, 2025 | 12B | 163K | Text input, Image input, Text output | ★★★★ | ★ | $ |
Meta: Llama 4 Maverick | Apr 05, 2025 | 17B | 1M | Text input, Image input, Text output | ★★★★ | ★★★ | $$$ |
Meta: Llama 4 Scout | Apr 05, 2025 | 17B | 1M | Text input, Image input, Text output | ★★★★ | ★★★ | $$ |
Llama Guard 3 8B | Feb 12, 2025 | 8B | 131K | Text input, Text output | ★ | ★ | $ |
Meta: Llama 3.3 70B Instruct | Dec 06, 2024 | 70B | 131K | Text input, Text output | ★★★★ | ★★★★★ | $ |
Meta: Llama 3.2 1B Instruct | Sep 24, 2024 | 1B | 131K | Text input, Text output | ★★ | ★ | $ |
Meta: Llama 3.2 3B Instruct | Sep 24, 2024 | 3B | 131K | Text input, Text output | ★★★ | ★ | $ |
Meta: Llama 3.2 11B Vision Instruct | Sep 24, 2024 | 11B | 128K | Text input, Image input, Text output | ★★★ | ★★ | $$ |
Meta: Llama 3.2 90B Vision Instruct | Sep 24, 2024 | 90B | 131K | Text input, Image input, Text output | ★★★ | ★★ | $$$$ |
Meta: Llama 3.1 405B (base) | Aug 01, 2024 | 405B | 32K | Text input, Text output | ★ | ★ | $$$$ |
Meta: Llama 3.1 70B Instruct | Jul 22, 2024 | 70B | 131K | Text input, Text output | ★★★★ | ★★ | $$ |
Meta: Llama 3.1 405B Instruct | Jul 22, 2024 | 405B | 32K | Text input, Text output | ★★★★ | ★★ | $$$$ |
Meta: LlamaGuard 2 8B | May 12, 2024 | 8B | 8K | Text input, Text output | ★★★★ | ★ | $$ |
Meta: Llama 3 8B Instruct | Apr 17, 2024 | 8B | 8K | Text input, Text output | ★★★ | ★★ | $ |
Meta: Llama 3 70B Instruct | Apr 17, 2024 | 70B | 8K | Text input, Text output | ★★★★ | ★★ | $$$ |
Meta: Llama 2 70B Chat (Unavailable) | Jun 19, 2023 | 70B | 4K | Text input, Text output | — | — | $$$$ |