Mistral: Mistral Small 3.1 24B

Input: text, image. Output: text.
Author's Description

Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities. It provides state-of-the-art performance in text-based reasoning and vision tasks, including image analysis, programming, mathematical reasoning, and multilingual support across dozens of languages. Equipped with an extensive 128k token context window and optimized for efficient local inference, it supports use cases such as conversational agents, function calling, long-document comprehension, and privacy-sensitive deployments. The updated version is [Mistral Small 3.2](mistralai/mistral-small-3.2-24b-instruct)

Key Specifications
| Spec | Value |
|---|---|
| Cost | $$ |
| Context | 131K |
| Parameters | 24B |
| Released | Mar 17, 2025 |
Supported Parameters

This model supports the following parameters:

Top Logprobs, Logit Bias, Logprobs, Stop, Seed, Top P, Max Tokens, Frequency Penalty, Temperature, Presence Penalty
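These parameters map onto the fields of a standard OpenAI-compatible chat-completions request, which is the convention most of the providers listed below follow. As a sketch, the payload below sets every supported parameter; the model slug matches the endpoint table further down, but the exact slug and accepted fields depend on your provider, so treat the specific values as placeholders.

```python
import json

# Illustrative request body only; no network call is made here.
payload = {
    "model": "mistralai/mistral-small-3.1-24b-instruct-2503",
    "messages": [
        {"role": "user", "content": "Summarize this contract clause."}
    ],
    # Sampling controls
    "temperature": 0.7,        # randomness of token sampling
    "top_p": 0.9,              # nucleus-sampling cutoff
    "seed": 42,                # best-effort reproducibility
    # Length / termination
    "max_tokens": 512,
    "stop": ["###"],           # stop generation at this sequence
    # Repetition controls
    "frequency_penalty": 0.2,
    "presence_penalty": 0.0,
    # Token-level diagnostics
    "logprobs": True,
    "top_logprobs": 5,         # per-token alternatives; requires logprobs
    "logit_bias": {},          # token-id -> bias map, empty here
}

print(json.dumps(payload, indent=2))
```

Sending this body to a provider's `/chat/completions` endpoint (with an API key) would exercise all ten supported parameters at once.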
Performance Summary

Mistral Small 3.1 24B demonstrates a strong overall performance profile, particularly in cost-efficiency and reliability. The model consistently offers among the most competitive pricing, ranking in the 89th percentile across benchmarks, and exhibits exceptional reliability with a 100% success rate, indicating minimal technical failures. While its speed generally places it in the top tier (66th percentile), specific benchmark durations vary.

In terms of capabilities, Mistral Small 3.1 24B shows notable strength in acknowledging uncertainty, achieving 98.0% accuracy in Hallucinations (Baseline) and ranking as the most accurate model at its price point in that category. It also performs well in General Knowledge (98.7% accuracy) and Ethics (99.0% accuracy). Mathematical reasoning is another highlight at 86.0% accuracy, again the most accurate at its price point.

However, the model shows relative weaknesses in Instruction Following (54.3% accuracy) and Reasoning (64.0% accuracy), suggesting room for improvement in handling complex, multi-step directives and abstract problem-solving. Its performance in Email Classification (96.0% accuracy) and Coding (81.0% accuracy) is solid but not top-tier.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.04 |
| Completion | $0.15 |

Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Nebius | mistralai/mistral-small-3.1-24b-instruct-2503 | 131K | $0.04 / 1M tokens | $0.15 / 1M tokens |
| Parasail | mistralai/mistral-small-3.1-24b-instruct-2503 | 131K | $0.04 / 1M tokens | $0.15 / 1M tokens |
| Mistral | mistralai/mistral-small-3.1-24b-instruct-2503 | 131K | $0.10 / 1M tokens | $0.30 / 1M tokens |
| Cloudflare | mistralai/mistral-small-3.1-24b-instruct-2503 | 128K | $0.35 / 1M tokens | $0.56 / 1M tokens |
| DeepInfra | mistralai/mistral-small-3.1-24b-instruct-2503 | 128K | $0.05 / 1M tokens | $0.10 / 1M tokens |
| Chutes | mistralai/mistral-small-3.1-24b-instruct-2503 | 131K | $0.04 / 1M tokens | $0.15 / 1M tokens |
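Because input and output rates differ across providers, the cheapest endpoint depends on the prompt-to-completion ratio of your workload. A small sketch using the rates from the table (the 10k/50k token split is an arbitrary example):

```python
# Per-1M-token (input, output) rates from the endpoint table, in USD.
endpoints = {
    "Nebius":     (0.04, 0.15),
    "Parasail":   (0.04, 0.15),
    "Mistral":    (0.10, 0.30),
    "Cloudflare": (0.35, 0.56),
    "DeepInfra":  (0.05, 0.10),
    "Chutes":     (0.04, 0.15),
}

def workload_cost(rates, prompt_tokens, completion_tokens):
    """USD cost of a workload at the given (input, output) rates."""
    in_rate, out_rate = rates
    return (prompt_tokens * in_rate + completion_tokens * out_rate) / 1_000_000

# A completion-heavy workload can flip the ranking: DeepInfra's slightly
# higher input rate is offset by its cheaper output rate.
prompt, completion = 10_000, 50_000
best = min(endpoints, key=lambda p: workload_cost(endpoints[p], prompt, completion))
print(best)  # -> DeepInfra
```

For prompt-heavy workloads the $0.04-input providers win instead, so it is worth recomputing this for your own token mix.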
Benchmark Results
| Benchmark | Category | Reasoning Strategy | Free | Executions | Accuracy | Cost | Duration |
|---|---|---|---|---|---|---|---|