Meta: Llama 3.1 70B Instruct

Modalities: text input, text output
Author's Description

Meta's latest class of model (Llama 3.1) launched with a variety of sizes & flavors. This 70B instruct-tuned version is optimized for high-quality dialogue use cases. It has demonstrated strong...

Key Specifications
Cost: $$
Context: 131K
Parameters: 70B
Released: Jul 22, 2024
Supported Parameters

This model supports the following parameters:

Seed, Tools, Frequency Penalty, Top P, Min P, Response Format, Temperature, Stop, Presence Penalty, Tool Choice, Max Tokens
Features

This model supports the following features:

Response Format, Tools
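As a minimal sketch of how these parameters are typically passed, the payload below targets an OpenAI-compatible chat-completions API, which most providers of this model expose. The exact endpoint URL and field names are assumptions; check your provider's documentation.

```python
# Hedged sketch: building a chat-completion request payload for an
# OpenAI-compatible endpoint serving this model. Field names follow the
# common OpenAI-style schema and map onto the Supported Parameters above;
# no network call is made here.

payload = {
    "model": "meta-llama/llama-3.1-70b-instruct",
    "messages": [{"role": "user", "content": "Classify this email as spam or not spam."}],
    # Sampling controls (Temperature, Top P, Min P, penalties, Seed)
    "temperature": 0.7,
    "top_p": 0.9,
    "min_p": 0.05,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "seed": 42,  # reproducible sampling where the provider supports it
    # Generation limits (Max Tokens, Stop)
    "max_tokens": 512,
    "stop": ["\n\n"],
    # Structured output, per the Response Format feature
    "response_format": {"type": "json_object"},
}
```

The same payload would gain a `tools` array (and optionally `tool_choice`) when using the Tools feature.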
Performance Summary

Meta's Llama 3.1 70B Instruct shows a strong overall profile, combining competitive speed (65th percentile across benchmarks, i.e. faster than roughly two-thirds of models) with highly competitive pricing (83rd percentile). It excels at Ethics, achieving perfect 100% accuracy and ranking as the most accurate model at its price point and among models of similar speed. It also performs very well on Email Classification (98.0% accuracy) and reliably acknowledges uncertainty on Hallucinations (94.0%); General Knowledge is solid at 97.8%. The model is markedly weaker on complex cognitive tasks: Mathematics (24.0%), Instruction Following (54.0%), Reasoning (52.0%), and especially Coding (2.0%) all score notably low. In short, the model is well suited to dialogue and safety-sensitive use cases but struggles with precise, multi-step logical and computational work.

Model Pricing

Current Pricing

Feature | Price (per 1M tokens)
Prompt | $0.40
Completion | $0.40

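Because prompt and completion tokens share a flat $0.40-per-million rate, per-request cost is simple to estimate. A minimal sketch (the function name is illustrative, not from any provider SDK):

```python
def request_cost_usd(prompt_tokens: int, completion_tokens: int,
                     prompt_price: float = 0.40,
                     completion_price: float = 0.40) -> float:
    """Cost in USD at per-1M-token rates; defaults match this model's pricing."""
    return (prompt_tokens * prompt_price
            + completion_tokens * completion_price) / 1_000_000

# e.g. a 2,000-token prompt with a 500-token completion:
cost = request_cost_usd(2_000, 500)  # -> 0.001 USD, i.e. a tenth of a cent
```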
Available Endpoints
Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output)
DeepInfra | meta-llama/llama-3.1-70b-instruct | 131K | $0.40 / 1M tokens | $0.40 / 1M tokens
Lambda | meta-llama/llama-3.1-70b-instruct | 131K | $0.40 / 1M tokens | $0.40 / 1M tokens
Nebius | meta-llama/llama-3.1-70b-instruct | 131K | $0.40 / 1M tokens | $0.40 / 1M tokens
InferenceNet | meta-llama/llama-3.1-70b-instruct | 16K | $0.40 / 1M tokens | $0.40 / 1M tokens
Hyperbolic | meta-llama/llama-3.1-70b-instruct | 131K | $0.40 / 1M tokens | $0.40 / 1M tokens
Together | meta-llama/llama-3.1-70b-instruct | 131K | $0.40 / 1M tokens | $0.40 / 1M tokens
Fireworks | meta-llama/llama-3.1-70b-instruct | 131K | $0.40 / 1M tokens | $0.40 / 1M tokens
NextBit | meta-llama/llama-3.1-70b-instruct | 131K | $0.40 / 1M tokens | $0.40 / 1M tokens
Phala | meta-llama/llama-3.1-70b-instruct | 131K | $0.40 / 1M tokens | $0.40 / 1M tokens