Author's Description
Meta's latest class of models, Llama 3.1, launched in a variety of sizes and flavors. This 70B instruct-tuned version is optimized for high-quality dialogue use cases and has demonstrated strong performance against leading closed-source models in human evaluations. To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).
Key Specifications
Supported Parameters
This model supports the following parameters:
Features
This model supports the following features:
Performance Summary
Meta's Llama 3.1 70B Instruct shows a balanced performance profile, excelling in some areas while leaving room for improvement in others. It delivers above-average speed (64th percentile across benchmarks) and consistently competitive pricing, placing it among the more cost-effective options (82nd percentile). On specific benchmarks, the model achieves perfect accuracy in Ethics, making it the most accurate model at its price point and among models of comparable speed. It also performs strongly on Hallucinations (94.0% accuracy), General Knowledge (97.8%), and Email Classification (98.0%), indicating robust capabilities in understanding and processing information. However, it shows notable weaknesses in more complex, specialized domains: Mathematics (24.0%) and Coding (2.0%) rank in the bottom percentiles, while Reasoning (52.0%) and Instruction Following (54.0%) are middling. Overall, Llama 3.1 70B Instruct is a strong contender for dialogue use cases that lean on ethical reasoning and general knowledge, but a weaker fit for highly technical or abstract problem-solving tasks.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.4 |
| Completion | $0.4 |
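Because pricing is a flat per-1M-token rate, the cost of a single request is just a weighted sum of prompt and completion tokens. A minimal Python sketch of that arithmetic; the helper name and example token counts are illustrative, not part of the listing:

```python
# Per-1M-token prices from the table above (USD).
PROMPT_PRICE_PER_M = 0.4
COMPLETION_PRICE_PER_M = 0.4

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimated USD cost for one request at the listed rates."""
    return (prompt_tokens * PROMPT_PRICE_PER_M
            + completion_tokens * COMPLETION_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with an 800-token reply costs about $0.00112.
print(f"${estimate_cost(2_000, 800):.5f}")
```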
Price History
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| DeepInfra | meta-llama/llama-3.1-70b-instruct | 131K | $0.4 / 1M tokens | $0.4 / 1M tokens |
| Lambda | meta-llama/llama-3.1-70b-instruct | 131K | $0.4 / 1M tokens | $0.4 / 1M tokens |
| Nebius | meta-llama/llama-3.1-70b-instruct | 131K | $0.4 / 1M tokens | $0.4 / 1M tokens |
| InferenceNet | meta-llama/llama-3.1-70b-instruct | 16K | $0.4 / 1M tokens | $0.4 / 1M tokens |
| Hyperbolic | meta-llama/llama-3.1-70b-instruct | 131K | $0.4 / 1M tokens | $0.4 / 1M tokens |
| Together | meta-llama/llama-3.1-70b-instruct | 131K | $0.88 / 1M tokens | $0.88 / 1M tokens |
| Fireworks | meta-llama/llama-3.1-70b-instruct | 131K | $0.9 / 1M tokens | $0.9 / 1M tokens |
| NextBit | meta-llama/llama-3.1-70b-instruct | 131K | $0.4 / 1M tokens | $0.4 / 1M tokens |
| Phala | meta-llama/llama-3.1-70b-instruct | 131K | $0.4 / 1M tokens | $0.4 / 1M tokens |
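Every provider above serves the same endpoint name (`meta-llama/llama-3.1-70b-instruct`) behind an OpenAI-compatible chat completions API. A minimal sketch using the OpenAI Python SDK; the gateway base URL and environment variable name are assumptions and will differ depending on which provider or aggregator you route through:

```python
import os
from openai import OpenAI  # pip install openai

# Assumption: an OpenAI-compatible gateway; substitute your provider's base URL and key.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",   # assumed aggregator-style gateway URL
    api_key=os.environ["OPENROUTER_API_KEY"],  # assumed environment variable
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.1-70b-instruct",  # endpoint name from the table above
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the Llama 3.1 release in two sentences."},
    ],
    max_tokens=200,
)
print(response.choices[0].message.content)
```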
Benchmark Results
| Benchmark | Category | Reasoning | Strategy | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|---|
Other Models by meta-llama
| Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| Meta: Llama Guard 4 12B | Apr 29, 2025 | 12B | 163K | Text input, Image input, Text output | — | ★ | $$ |
| Meta: Llama 4 Maverick | Apr 05, 2025 | 17B | 1M | Text input, Image input, Text output | ★★★★★ | ★★★ | $$$ |
| Meta: Llama 4 Scout | Apr 05, 2025 | 17B | 327K | Text input, Image input, Text output | ★★★★ | ★★ | $$ |
| Llama Guard 3 8B | Feb 12, 2025 | 8B | 131K | Text input, Text output | ★★ | ★ | $$ |
| Meta: Llama 3.3 70B Instruct | Dec 06, 2024 | 70B | 131K | Text input, Text output | ★★★★ | ★★★★ | $ |
| Meta: Llama 3.2 1B Instruct | Sep 24, 2024 | 1B | 131K | Text input, Text output | ★★ | ★ | $ |
| Meta: Llama 3.2 3B Instruct | Sep 24, 2024 | 3B | 131K | Text input, Text output | ★★★ | ★ | $ |
| Meta: Llama 3.2 11B Vision Instruct | Sep 24, 2024 | 11B | 128K | Text input, Image input, Text output | ★★ | ★★ | $$ |
| Meta: Llama 3.2 90B Vision Instruct | Sep 24, 2024 | 90B | 131K | Text input, Image input, Text output | ★★★ | ★★ | $$$$ |
| Meta: Llama 3.1 405B (base) | Aug 01, 2024 | 405B | 32K | Text input, Text output | ★ | ★ | $$$ |
| Meta: Llama 3.1 405B Instruct | Jul 22, 2024 | 405B | 32K | Text input, Text output | ★★★★ | ★★ | $$$ |
| Meta: Llama 3.1 8B Instruct | Jul 22, 2024 | 8B | 131K | Text input, Text output | ★★★ | ★★ | $ |
| Meta: LlamaGuard 2 8B | May 12, 2024 | 8B | 8K | Text input, Text output | ★★★★ | ★ | $$ |
| Meta: Llama 3 8B Instruct | Apr 17, 2024 | 8B | 8K | Text input, Text output | ★★★ | ★★ | $ |
| Meta: Llama 3 70B Instruct | Apr 17, 2024 | 70B | 8K | Text input, Text output | ★★★★ | ★★ | $$$ |
| Meta: Llama 2 70B Chat (Unavailable) | Jun 19, 2023 | 70B | 4K | Text input, Text output | — | — | $$$$ |