Mistral: Mistral 7B Instruct v0.1

Text input · Text output
Author's Description

A 7.3B parameter model that outperforms Llama 2 13B on all benchmarks, with optimizations for speed and context length.

Key Specifications
Cost: $$
Context: 2K
Parameters: 7B
Released: Sep 27, 2023
Supported Parameters

This model supports the following parameters: Temperature, Top P, Presence Penalty, Frequency Penalty, Seed, and Max Tokens.
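These are the standard sampling and generation controls exposed by OpenAI-compatible chat APIs. The sketch below shows how a single request might set all six of them; the base URL, authentication header, and response shape are assumptions for illustration and are not taken from this page.

```python
import requests

# Hypothetical OpenAI-compatible gateway; URL and auth are placeholders,
# not an endpoint documented on this page.
BASE_URL = "https://example-gateway.invalid/v1/chat/completions"
API_KEY = "YOUR_API_KEY"

payload = {
    "model": "mistralai/mistral-7b-instruct-v0.1",
    "messages": [
        {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}
    ],
    # The six parameters listed above for this model.
    "temperature": 0.7,        # sampling randomness
    "top_p": 0.9,              # nucleus sampling cutoff
    "presence_penalty": 0.0,   # discourage tokens already present in the text
    "frequency_penalty": 0.0,  # discourage frequently repeated tokens
    "seed": 42,                # best-effort reproducibility
    "max_tokens": 256,         # cap on completion length
}

response = requests.post(
    BASE_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()

# Assumes the usual OpenAI-style response layout.
print(response.json()["choices"][0]["message"]["content"])
```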
Performance Summary

Mistral 7B Instruct v0.1, a 7.3B-parameter model, consistently ranks among the fastest models evaluated and offers highly competitive pricing across all benchmarks. Specific reliability data is not provided, but its consistent behavior across tests suggests generally reliable operation. By category, the model is strongest in Ethics (97.0% accuracy) and shows solid Reasoning (63.3%), while General Knowledge is fair at 79.8%. It is weakest in Coding and Instruction Following, at 1.0% and 0.0% accuracy respectively, indicating these areas need substantial improvement; Email Classification is also weak at 77.0% accuracy, placing it in the 8th percentile. In short, speed and cost are clear advantages, but accuracy varies widely: the model excels at some cognitive tasks while struggling with tasks that demand precise execution or deep domain-specific knowledge.

Model Pricing

Current Pricing

Prompt: $0.11 per 1M tokens
Completion: $0.19 per 1M tokens
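For a rough sense of what these rates mean per request, the sketch below estimates the cost of one call at the listed prompt and completion prices; the token counts are arbitrary examples.

```python
# Prices from the table above, in USD per 1M tokens.
PROMPT_PRICE_PER_M = 0.11
COMPLETION_PRICE_PER_M = 0.19

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of a single request at the listed rates."""
    return (prompt_tokens * PROMPT_PRICE_PER_M
            + completion_tokens * COMPLETION_PRICE_PER_M) / 1_000_000

# Example: a 1,500-token prompt with a 300-token completion.
print(f"${estimate_cost(1_500, 300):.6f}")  # ≈ $0.000222
```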

Price History

Available Endpoints
Provider | Endpoint | Context Length | Pricing (Input) | Pricing (Output)
Cloudflare | mistralai/mistral-7b-instruct-v0.1 | 2K | $0.11 / 1M tokens | $0.19 / 1M tokens
Together | mistralai/mistral-7b-instruct-v0.1 | 32K | $0.20 / 1M tokens | $0.20 / 1M tokens
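Since the two endpoints differ in both context length and price, a client might choose between them per request. The sketch below is a minimal illustration built only from the figures in this table; the data structure and selection logic are assumptions rather than any documented routing API, and the listed 2K and 32K context lengths are read as 2,048 and 32,768 tokens.

```python
# Endpoint data from the table above; prices are USD per 1M tokens.
ENDPOINTS = [
    {"provider": "Cloudflare", "context": 2_048,  "input": 0.11, "output": 0.19},
    {"provider": "Together",   "context": 32_768, "input": 0.20, "output": 0.20},
]

def cheapest_endpoint(prompt_tokens: int, completion_tokens: int):
    """Pick the lowest-cost endpoint whose context window fits the request.

    Illustrative only: real routing would also weigh latency, availability, etc.
    """
    def cost(ep):
        return (prompt_tokens * ep["input"]
                + completion_tokens * ep["output"]) / 1_000_000

    viable = [ep for ep in ENDPOINTS
              if prompt_tokens + completion_tokens <= ep["context"]]
    return min(viable, key=cost) if viable else None

# A short request fits Cloudflare's 2K window and is cheaper there;
# a long-context request can only go to Together.
print(cheapest_endpoint(1_000, 200)["provider"])   # Cloudflare
print(cheapest_endpoint(10_000, 500)["provider"])  # Together
```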
Benchmark Results