Author's Description
Llama 3.2 1B is a 1-billion-parameter language model focused on performing natural language tasks efficiently, such as summarization, dialogue, and multilingual text analysis. Its small size allows it to run in low-resource environments while maintaining strong task performance. Supporting eight core languages and fine-tunable for more, Llama 3.2 1B is well suited to businesses or developers seeking lightweight yet capable AI that can operate in diverse multilingual settings without the high computational demands of larger models. See the [original model card](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/MODEL_CARD.md). Usage of this model is subject to [Meta's Acceptable Use Policy](https://www.llama.com/llama3/use-policy/).
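As a concrete illustration, the sketch below sends a short summarization request to the model through an OpenAI-compatible chat completions endpoint. The base URL, environment variable name, and prompt are assumptions for the example and are not part of the original model card; substitute whichever provider or gateway you actually use.

```python
# Minimal sketch: calling Llama 3.2 1B Instruct via an OpenAI-compatible API.
# The base_url and OPENROUTER_API_KEY environment variable are assumptions,
# not values specified by the model card.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed OpenAI-compatible gateway
    api_key=os.environ["OPENROUTER_API_KEY"],
)

response = client.chat.completions.create(
    model="meta-llama/llama-3.2-1b-instruct",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Summarize: Llama 3.2 1B is a lightweight multilingual model."},
    ],
    max_tokens=128,
)
print(response.choices[0].message.content)
```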
Key Specifications
Supported Parameters
This model supports the following parameters:
Features
This model supports the following features:
Performance Summary
Meta's Llama 3.2 1B Instruct, a 1-billion-parameter language model, is exceptionally efficient in both speed and cost: it consistently ranks among the fastest models and offers highly competitive pricing across all evaluated benchmarks. Designed for low-resource environments and multilingual tasks, its performance profile reflects that focus. The model shows significant limitations in core cognitive tasks, scoring 0.0% accuracy in General Knowledge, Ethics, Mathematics, and Coding, which indicates a lack of foundational capability in these complex domains. Its Instruction Following (18.9% accuracy) and Reasoning (22.0% accuracy) results are also notably low, placing it in the lower percentiles for those categories. Its strongest result was in Email Classification at 32.0% accuracy (6th percentile), which still leaves substantial room for improvement. A separate reliability score is not reported, but the model completed the benchmark runs, suggesting a baseline level of operational stability. Overall, Llama 3.2 1B Instruct is a highly efficient, cost-effective option for basic natural language tasks where speed and budget are paramount, but it is not suited to work requiring deep understanding, complex reasoning, or accurate knowledge recall.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.005 |
| Completion | $0.01 |
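Per-request cost follows directly from these rates. The snippet below is a small sketch (not part of the listing) that multiplies token counts by the per-million-token prices above; the function name and example token counts are illustrative.

```python
# Sketch: estimate request cost from the listed per-1M-token prices.
PROMPT_PRICE_PER_M = 0.005      # $ per 1M prompt tokens
COMPLETION_PRICE_PER_M = 0.01   # $ per 1M completion tokens

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return (prompt_tokens / 1_000_000) * PROMPT_PRICE_PER_M + \
           (completion_tokens / 1_000_000) * COMPLETION_PRICE_PER_M

# Example: a 2,000-token prompt with a 500-token completion costs $0.000015.
print(f"${estimate_cost(2_000, 500):.6f}")
```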
Price History
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| DeepInfra | meta-llama/llama-3.2-1b-instruct | 131K | $0.005 / 1M tokens | $0.01 / 1M tokens |
| InferenceNet | meta-llama/llama-3.2-1b-instruct | 16K | $0.01 / 1M tokens | $0.01 / 1M tokens |
| Cloudflare | meta-llama/llama-3.2-1b-instruct | 60K | $0.027 / 1M tokens | $0.20 / 1M tokens |
| SambaNova | meta-llama/llama-3.2-1b-instruct | 16K | $0.005 / 1M tokens | $0.01 / 1M tokens |
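Because the same model ID is served at different rates and context lengths, it can help to compare providers for a given workload. The sketch below uses only the figures from the table above; the workload size is an illustrative assumption, and how a given platform actually routes requests to providers is not specified here.

```python
# Sketch: compare listed providers for a fixed workload (1M prompt + 0.2M completion tokens).
# Prices ($ per 1M tokens) and context lengths are copied from the endpoints table above.
providers = {
    "DeepInfra":    {"input": 0.005, "output": 0.01, "context": 131_000},
    "InferenceNet": {"input": 0.01,  "output": 0.01, "context": 16_000},
    "Cloudflare":   {"input": 0.027, "output": 0.20, "context": 60_000},
    "SambaNova":    {"input": 0.005, "output": 0.01, "context": 16_000},
}

prompt_m, completion_m = 1.0, 0.2  # workload in millions of tokens (assumed)
for name, p in sorted(providers.items(),
                      key=lambda kv: kv[1]["input"] * prompt_m + kv[1]["output"] * completion_m):
    cost = p["input"] * prompt_m + p["output"] * completion_m
    print(f"{name:<13} ${cost:.3f}  (context {p['context']:,} tokens)")
```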
Benchmark Results
| Benchmark | Category | Reasoning | Strategy | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|---|
Other Models by meta-llama
| Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| Meta: Llama Guard 4 12B | Apr 29, 2025 | 12B | 163K | Text input, Image input, Text output | — | ★ | $$ |
| Meta: Llama 4 Maverick | Apr 05, 2025 | 17B | 1M | Text input, Image input, Text output | ★★★★★ | ★★★ | $$$ |
| Meta: Llama 4 Scout | Apr 05, 2025 | 17B | 327K | Text input, Image input, Text output | ★★★★ | ★★ | $$ |
| Llama Guard 3 8B | Feb 12, 2025 | 8B | 131K | Text input, Text output | ★★ | ★ | $$ |
| Meta: Llama 3.3 70B Instruct | Dec 06, 2024 | 70B | 131K | Text input, Text output | ★★★★ | ★★★★ | $ |
| Meta: Llama 3.2 3B Instruct | Sep 24, 2024 | 3B | 131K | Text input, Text output | ★★★ | ★ | $ |
| Meta: Llama 3.2 11B Vision Instruct | Sep 24, 2024 | 11B | 128K | Text input, Image input, Text output | ★★ | ★★ | $$ |
| Meta: Llama 3.2 90B Vision Instruct | Sep 24, 2024 | 90B | 131K | Text input, Image input, Text output | ★★★ | ★★ | $$$$ |
| Meta: Llama 3.1 405B (base) | Aug 01, 2024 | 405B | 32K | Text input, Text output | ★ | ★ | $$$ |
| Meta: Llama 3.1 70B Instruct | Jul 22, 2024 | 70B | 131K | Text input, Text output | ★★★★ | ★★ | $$ |
| Meta: Llama 3.1 405B Instruct | Jul 22, 2024 | 405B | 32K | Text input, Text output | ★★★★ | ★★ | $$$ |
| Meta: Llama 3.1 8B Instruct | Jul 22, 2024 | 8B | 131K | Text input, Text output | ★★★ | ★★ | $ |
| Meta: LlamaGuard 2 8B | May 12, 2024 | 8B | 8K | Text input, Text output | ★★★★ | ★ | $$ |
| Meta: Llama 3 8B Instruct | Apr 17, 2024 | 8B | 8K | Text input, Text output | ★★★ | ★★ | $ |
| Meta: Llama 3 70B Instruct | Apr 17, 2024 | 70B | 8K | Text input, Text output | ★★★★ | ★★ | $$$ |
| Meta: Llama 2 70B Chat (Unavailable) | Jun 19, 2023 | 70B | 4K | Text input, Text output | — | — | $$$$ |