Mistral: Mistral Small 3.1 24B

Modalities: text input, image input, text output
Author's Description

Mistral Small 3.1 24B Instruct is an upgraded variant of Mistral Small 3 (2501), featuring 24 billion parameters with advanced multimodal capabilities. It provides state-of-the-art performance in text-based reasoning and vision tasks, including image analysis, programming, mathematical reasoning, and multilingual support across dozens of languages. Equipped with an extensive 128k-token context window and optimized for efficient local inference, it supports use cases such as conversational agents, function calling, long-document comprehension, and privacy-sensitive deployments. An updated version, [Mistral Small 3.2](mistralai/mistral-small-3.2-24b-instruct), is also available.
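
Because the model accepts both text and image input, a typical call combines the two in a single chat message. The sketch below is illustrative only: it assumes an OpenAI-compatible chat-completions gateway (OpenRouter is used as an example), and the base URL, environment variable, model slug, and image URL are assumptions rather than details from this page.

```python
# Minimal sketch: sending a text + image prompt to Mistral Small 3.1 24B
# through an OpenAI-compatible chat-completions API (assumed gateway shown).
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",       # assumed gateway
    api_key=os.environ["OPENROUTER_API_KEY"],      # assumed env var
)

response = client.chat.completions.create(
    model="mistralai/mistral-small-3.1-24b-instruct",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe the chart in this image."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},  # hypothetical image
            ],
        }
    ],
)
print(response.choices[0].message.content)
```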

Key Specifications
Cost: $$
Context: 131K
Parameters: 24B
Released: Mar 17, 2025

Speed, ability, and reliability ratings are summarized in the Performance Summary below.
Supported Parameters

This model supports the following parameters:

Stop, Presence Penalty, Logit Bias, Top P, Temperature, Seed, Frequency Penalty, Logprobs, Max Tokens, Top Logprobs
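
As a hedged illustration, the request below exercises most of these parameters through an OpenAI-compatible Python client; the gateway URL, environment variable, and specific values are assumptions chosen for the example, not recommendations.

```python
# Illustrative only: setting the supported sampling parameters via an
# OpenAI-compatible client. Gateway URL, env var, and values are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",      # assumed gateway
    api_key=os.environ["OPENROUTER_API_KEY"],     # assumed env var
)

response = client.chat.completions.create(
    model="mistralai/mistral-small-3.1-24b-instruct",
    messages=[{"role": "user", "content": "Name three uses of a 128k context window."}],
    temperature=0.7,          # sampling temperature
    top_p=0.9,                # nucleus sampling cutoff
    max_tokens=512,           # completion length cap
    seed=42,                  # best-effort reproducibility
    stop=["\n\n"],            # stop sequence(s)
    frequency_penalty=0.1,
    presence_penalty=0.0,
    logprobs=True,            # return token log-probabilities
    top_logprobs=3,           # top alternatives per token
    # logit_bias={...}        # optional map of token IDs to bias values
)
print(response.choices[0].message.content)
```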
Performance Summary

Mistral Small 3.1 24B, created by mistralai, demonstrates a strong overall performance profile. It ranks in the 68th percentile for speed across benchmarks, indicating efficient processing, and its pricing is highly competitive at the 87th percentile, making it a cost-effective option. The model also exhibits exceptional reliability, with a perfect 100% success rate across all evaluated benchmarks.

On individual benchmarks, the model is particularly strong in Ethics and General Knowledge, scoring 99.0% and 98.7% accuracy respectively (65th and 67th percentiles). Reasoning accuracy is a solid 62.0% (78th percentile for duration), and Coding reaches 81.0%. Instruction Following (54.3%) and Email Classification (96.0%) are more moderate, though the model's cost-efficiency in these categories is notable. Its 128k-token context window and optimization for local inference further broaden its range of applications.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
| --- | --- |
| Prompt | $0.05 |
| Completion | $0.15 |
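
A quick back-of-the-envelope estimate at these rates (a sketch; the token counts are hypothetical):

```python
# Cost estimate at the listed rates: $0.05 per 1M prompt tokens,
# $0.15 per 1M completion tokens. Token counts below are hypothetical.
PROMPT_RATE = 0.05 / 1_000_000       # USD per prompt token
COMPLETION_RATE = 0.15 / 1_000_000   # USD per completion token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# e.g. summarizing a 100k-token document into a 1k-token answer:
print(f"${estimate_cost(100_000, 1_000):.5f}")  # ≈ $0.00515
```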

Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
| --- | --- | --- | --- | --- |
| Nebius | mistralai/mistral-small-3.1-24b-instruct-2503 | 131K | $0.05 / 1M tokens | $0.15 / 1M tokens |
| Parasail | mistralai/mistral-small-3.1-24b-instruct-2503 | 131K | $0.018 / 1M tokens | $0.072 / 1M tokens |
| Mistral | mistralai/mistral-small-3.1-24b-instruct-2503 | 131K | $0.10 / 1M tokens | $0.30 / 1M tokens |
| Cloudflare | mistralai/mistral-small-3.1-24b-instruct-2503 | 128K | $0.35 / 1M tokens | $0.56 / 1M tokens |
| DeepInfra | mistralai/mistral-small-3.1-24b-instruct-2503 | 128K | $0.05 / 1M tokens | $0.10 / 1M tokens |
| Chutes | mistralai/mistral-small-3.1-24b-instruct-2503 | 131K | $0.018 / 1M tokens | $0.072 / 1M tokens |
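
To compare providers, a small helper can rank the endpoints above by total cost for a given workload. The prices and context lengths are copied from the table; the token mix used in the example is hypothetical.

```python
# Rank the endpoints listed above by total cost for a given token mix.
# Prices are USD per 1M tokens, copied from the table; context lengths are
# the listed values treated as approximate token counts.
ENDPOINTS = {
    "Nebius":     {"input": 0.05,  "output": 0.15,  "context": 131_000},
    "Parasail":   {"input": 0.018, "output": 0.072, "context": 131_000},
    "Mistral":    {"input": 0.10,  "output": 0.30,  "context": 131_000},
    "Cloudflare": {"input": 0.35,  "output": 0.56,  "context": 128_000},
    "DeepInfra":  {"input": 0.05,  "output": 0.10,  "context": 128_000},
    "Chutes":     {"input": 0.018, "output": 0.072, "context": 131_000},
}

def cost_usd(provider: str, prompt_tokens: int, completion_tokens: int) -> float:
    p = ENDPOINTS[provider]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000

# Hypothetical workload: 100k prompt tokens, 5k completion tokens.
ranked = sorted(ENDPOINTS, key=lambda name: cost_usd(name, 100_000, 5_000))
print(ranked[0])  # "Parasail" (tied with Chutes at this token mix)
```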