Qwen: Qwen2.5 Coder 7B Instruct

Text input → Text output
Author's Description

Qwen2.5-Coder-7B-Instruct is a 7B-parameter, instruction-tuned language model optimized for code-related tasks such as code generation, code reasoning, and bug fixing. Built on the Qwen2.5 architecture, it incorporates enhancements such as RoPE, SwiGLU, RMSNorm, and grouped-query attention (GQA), and supports contexts of up to 128K tokens using YaRN-based extrapolation. It is trained on a large corpus of source code, synthetic data, and text-code grounding data, giving it robust performance across programming languages and in agentic coding workflows. The model is part of the Qwen2.5-Coder family and offers strong compatibility with tools like vLLM for efficient deployment. It is released under the Apache 2.0 license.
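
As a rough, unofficial sketch of the vLLM compatibility mentioned above, the example below runs offline inference with the vLLM Python API. The Hugging Face model ID and the 32K max_model_len mirror this listing; the prompt and sampling values are purely illustrative.

```python
# Minimal sketch, assuming a local GPU with enough memory and a recent vLLM release.
# Chat templating is omitted for brevity; for an instruct model, format the prompt
# with the tokenizer's chat template or use vLLM's OpenAI-compatible server instead.
from vllm import LLM, SamplingParams

llm = LLM(model="Qwen/Qwen2.5-Coder-7B-Instruct", max_model_len=32768)
params = SamplingParams(temperature=0.2, top_p=0.9, max_tokens=512)

prompt = "Write a Python function that reverses a singly linked list."
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```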

Key Specifications
Cost: $
Context: 32K tokens
Parameters: 7B
Released: Apr 15, 2025
Supported Parameters

This model supports the following parameters (see the example request below):

Max Tokens, Top P, Structured Outputs, Frequency Penalty, Presence Penalty, Temperature, Response Format
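
The request below is a minimal sketch of these parameters sent through an OpenAI-compatible client. The base URL and API key are placeholders, the parameter values are illustrative, and the model slug follows the endpoint name listed further down.

```python
# Minimal sketch, assuming an OpenAI-compatible endpoint for this model.
# Replace base_url and api_key with your provider's values.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.invalid/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="qwen/qwen2.5-coder-7b-instruct",
    messages=[{"role": "user", "content": "Fix the bug: `for i in range(len(xs) - 1): print(xs[i])` skips the last item."}],
    max_tokens=256,         # Max Tokens
    temperature=0.2,        # Temperature
    top_p=0.9,              # Top P
    frequency_penalty=0.0,  # Frequency Penalty
    presence_penalty=0.0,   # Presence Penalty
)
print(response.choices[0].message.content)
```
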
Features

This model supports the following features (a structured-output request is sketched after the list):

Structured Outputs, Response Format
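
As a hedged illustration of the Structured Outputs and Response Format features, the sketch below asks for JSON conforming to a small schema. The exact response_format payload a provider accepts can differ, so treat the json_schema shape here as an assumption to verify against the provider's documentation.

```python
# Minimal sketch, assuming OpenAI-style structured outputs are accepted by the endpoint.
from openai import OpenAI

client = OpenAI(base_url="https://example-provider.invalid/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="qwen/qwen2.5-coder-7b-instruct",
    messages=[{"role": "user", "content": "Summarize this bug report as JSON: crash on empty input."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "bug_summary",
            "schema": {
                "type": "object",
                "properties": {
                    "title": {"type": "string"},
                    "severity": {"type": "string"},
                },
                "required": ["title", "severity"],
            },
        },
    },
)
print(response.choices[0].message.content)  # JSON text matching the schema
```
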
Model Pricing

Current Pricing

Prompt: $0.03 per 1M tokens
Completion: $0.09 per 1M tokens
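
To make the listed rates concrete, the back-of-the-envelope calculation below prices a single request; the token counts are made up for illustration.

```python
# Cost sketch using the listed rates: $0.03 (prompt) and $0.09 (completion) per 1M tokens.
PROMPT_RATE = 0.03 / 1_000_000      # USD per prompt token
COMPLETION_RATE = 0.09 / 1_000_000  # USD per completion token

prompt_tokens, completion_tokens = 12_000, 2_000  # hypothetical request
cost = prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE
print(f"${cost:.6f}")  # -> $0.000540
```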

Available Endpoints
Provider: Nebius
Endpoint Name: Nebius | qwen/qwen2.5-coder-7b-instruct
Context Length: 32K tokens
Pricing (Input): $0.03 / 1M tokens
Pricing (Output): $0.09 / 1M tokens