Meta: Llama 3.1 8B Instruct

Modalities: text input, text output
Author's Description

Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 8B instruct-tuned version is fast and efficient. It has demonstrated strong performance compared to leading closed-source models in human evaluations. To read more about the model release, [click here](https://ai.meta.com/blog/meta-llama-3-1/). Usage of this model is subject to [Meta's Acceptable Use Policy](https://llama.meta.com/llama3/use-policy/).

Key Specifications

| Specification | Value |
| --- | --- |
| Cost | $ |
| Context | 131K |
| Parameters | 8B |
| Released | Jul 22, 2024 |
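
Because context limits are enforced in tokens, a prompt can be pre-checked against the 131K window with the model's tokenizer. A minimal sketch, assuming the (gated) Hugging Face repo `meta-llama/Meta-Llama-3.1-8B-Instruct` and taking "131K" to mean 131,072 tokens:

```python
from transformers import AutoTokenizer

# Assumed Hugging Face model ID; the repo is gated, so access must be granted first.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")

CONTEXT_WINDOW = 131_072  # assumed token count behind the "131K" figure above
MAX_COMPLETION = 1_024    # tokens reserved for the model's reply (illustrative)

prompt = "Summarize the following document: ..."
n_tokens = len(tokenizer.encode(prompt))

if n_tokens + MAX_COMPLETION > CONTEXT_WINDOW:
    raise ValueError(f"Prompt of {n_tokens} tokens leaves no room for the completion")
print(f"{n_tokens} prompt tokens, {CONTEXT_WINDOW - n_tokens - MAX_COMPLETION} tokens of headroom")
```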
Supported Parameters

This model supports the following parameters:

Stop, Presence Penalty, Logit Bias, Top P, Temperature, Min P, Seed, Frequency Penalty, Logprobs, Max Tokens, Top Logprobs
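
As a rough illustration, the sketch below passes each of these parameters in an OpenAI-compatible chat-completions request. The base URL and API key are placeholders, the parameter values are arbitrary, and support for individual parameters can vary by provider:

```python
import requests

BASE_URL = "https://api.example.com/v1/chat/completions"  # placeholder endpoint
API_KEY = "YOUR_API_KEY"                                  # placeholder credential

payload = {
    "model": "meta-llama/llama-3.1-8b-instruct",
    "messages": [{"role": "user", "content": "Is this email spam? ..."}],
    "temperature": 0.7,        # sampling temperature
    "top_p": 0.9,              # nucleus-sampling cutoff
    "min_p": 0.05,             # drop tokens below 5% of the top token's probability
    "frequency_penalty": 0.0,  # penalize frequently repeated tokens
    "presence_penalty": 0.0,   # penalize tokens already present in the text
    "logit_bias": {},          # map of token ID -> bias; left empty here
    "seed": 42,                # best-effort reproducible sampling
    "stop": ["\n\n"],          # stop sequences
    "max_tokens": 256,         # cap on completion length
    "logprobs": True,          # return token log probabilities
    "top_logprobs": 5,         # alternatives to report per position
}

response = requests.post(
    BASE_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```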
Performance Summary

Meta's Llama 3.1 8B Instruct, released on July 22, 2024, shows a balanced performance profile that is strongest on cost: its pricing ranks in the 98th percentile across six benchmarks, making it among the most competitive options. Response times are middling, at the 56th percentile for speed, and reliability is strong, with an 89% success rate of evaluable responses.

The model's clearest strengths are Classification and Ethics, where it reaches 95.0% accuracy on Email Classification and 98.5% on Ethics while remaining highly cost-effective; at its price point, it is the most accurate model for these tasks.

It is weaker on more complex cognitive tasks, scoring 32.2% on Instruction Following, 46.0% on Reasoning, and 87.5% on General Knowledge, where its accuracy percentiles are lower. Coding accuracy is moderate at 69.0%, and its reasoning duration is notably slower than average. Overall, the model is a strong contender for cost-sensitive applications that need high reliability and strong classification and ethical-reasoning performance.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
| --- | --- |
| Prompt | $0.015 |
| Completion | $0.02 |
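
At these rates, per-request cost is straightforward arithmetic. A minimal sketch with made-up token counts:

```python
PROMPT_PRICE_PER_M = 0.015     # $ per 1M prompt tokens (table above)
COMPLETION_PRICE_PER_M = 0.02  # $ per 1M completion tokens (table above)

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of one request at the listed rates."""
    return (prompt_tokens * PROMPT_PRICE_PER_M
            + completion_tokens * COMPLETION_PRICE_PER_M) / 1_000_000

# Example: 2,000 prompt tokens + 500 completion tokens
# = 2,000 * $0.015/1M + 500 * $0.02/1M = $0.00003 + $0.00001 = $0.00004
print(f"${request_cost(2_000, 500):.5f}")
```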

Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
| --- | --- | --- | --- | --- |
| Kluster | meta-llama/llama-3.1-8b-instruct | 131K | $0.015 / 1M tokens | $0.02 / 1M tokens |
| DeepInfra | meta-llama/llama-3.1-8b-instruct | 131K | $0.015 / 1M tokens | $0.02 / 1M tokens |
| InferenceNet | meta-llama/llama-3.1-8b-instruct | 16K | $0.02 / 1M tokens | $0.03 / 1M tokens |
| Novita | meta-llama/llama-3.1-8b-instruct | 16K | $0.02 / 1M tokens | $0.05 / 1M tokens |
| Nebius | meta-llama/llama-3.1-8b-instruct | 131K | $0.02 / 1M tokens | $0.06 / 1M tokens |
| Lambda | meta-llama/llama-3.1-8b-instruct | 131K | $0.025 / 1M tokens | $0.04 / 1M tokens |
| DeepInfra | meta-llama/llama-3.1-8b-instruct | 131K | $0.03 / 1M tokens | $0.05 / 1M tokens |
| Cloudflare | meta-llama/llama-3.1-8b-instruct | 32K | $0.045 / 1M tokens | $0.384 / 1M tokens |
| Groq | meta-llama/llama-3.1-8b-instruct | 131K | $0.05 / 1M tokens | $0.08 / 1M tokens |
| Hyperbolic | meta-llama/llama-3.1-8b-instruct | 131K | $0.10 / 1M tokens | $0.10 / 1M tokens |
| Cerebras | meta-llama/llama-3.1-8b-instruct | 32K | $0.10 / 1M tokens | $0.10 / 1M tokens |
| Friendli | meta-llama/llama-3.1-8b-instruct | 131K | $0.10 / 1M tokens | $0.10 / 1M tokens |
| SambaNova | meta-llama/llama-3.1-8b-instruct | 16K | $0.10 / 1M tokens | $0.20 / 1M tokens |
| Together | meta-llama/llama-3.1-8b-instruct | 131K | $0.18 / 1M tokens | $0.18 / 1M tokens |
| Fireworks | meta-llama/llama-3.1-8b-instruct | 131K | $0.20 / 1M tokens | $0.20 / 1M tokens |
| Avian | meta-llama/llama-3.1-8b-instruct | 131K | $0.20 / 1M tokens | $0.20 / 1M tokens |
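
As one way to act on this table, the sketch below picks the cheapest listed endpoint that satisfies a context-length requirement for a given token mix. Only a subset of rows is transcribed, and weighting input against output prices by expected token counts is an illustrative choice, not a recommendation from the listing:

```python
# (provider, context length in tokens, $ per 1M input tokens, $ per 1M output tokens)
ENDPOINTS = [
    ("Kluster",      131_000, 0.015, 0.02),
    ("DeepInfra",    131_000, 0.015, 0.02),
    ("InferenceNet",  16_000, 0.02,  0.03),
    ("Novita",        16_000, 0.02,  0.05),
    ("Nebius",       131_000, 0.02,  0.06),
    ("Lambda",       131_000, 0.025, 0.04),
    ("Cloudflare",    32_000, 0.045, 0.384),
    ("Groq",         131_000, 0.05,  0.08),
]

def cheapest(min_context: int, prompt_tokens: int, completion_tokens: int):
    """Lowest blended-cost endpoint whose context window fits the workload."""
    viable = [e for e in ENDPOINTS if e[1] >= min_context]
    return min(viable, key=lambda e: prompt_tokens * e[2] + completion_tokens * e[3])

# Example: needs the full 131K window, heavy prompt / light completion workload
print(cheapest(131_000, 100_000, 1_000))  # -> ('Kluster', 131000, 0.015, 0.02)
```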
Benchmark Results
| Benchmark | Category | Reasoning | Free Executions | Accuracy | Cost | Duration |
| --- | --- | --- | --- | --- | --- | --- |