MoonshotAI: Kimi K2 0905 (exacto)

Input: Text | Output: Text
Author's Description

Kimi K2 0905 is the September update of [Kimi K2 0711](moonshotai/kimi-k2). It is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI, featuring 1 trillion total parameters with 32 billion active per forward pass. It supports long-context inference up to 256k tokens, extended from the previous 128k. This update improves agentic coding with higher accuracy and better generalization across scaffolds, and enhances frontend coding with more aesthetic and functional outputs for web, 3D, and related tasks. Kimi K2 is optimized for agentic capabilities, including advanced tool use, reasoning, and code synthesis. It excels across coding (LiveCodeBench, SWE-bench), reasoning (ZebraLogic, GPQA), and tool-use (Tau2, AceBench) benchmarks. The model is trained with a novel stack incorporating the MuonClip optimizer for stable large-scale MoE training.
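
The description emphasizes agentic tool use; the sketch below shows one way to exercise it through an OpenAI-compatible chat completions API. This is a minimal sketch, assuming the `openai` Python SDK, an OpenRouter-style gateway URL, an `OPENROUTER_API_KEY` environment variable, and a hypothetical `get_weather` tool; the model slug comes from the endpoints table further down.

```python
# Minimal tool-calling sketch against an OpenAI-compatible endpoint.
# Assumptions: `openai` SDK, OpenRouter-style base URL, key in OPENROUTER_API_KEY.
import json
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed gateway; adjust for your setup
    api_key=os.environ["OPENROUTER_API_KEY"],
)

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool, for illustration only
            "description": "Look up the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="moonshotai/kimi-k2-0905:exacto",
    messages=[{"role": "user", "content": "What's the weather in Osaka right now?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model elects to call the tool, its arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```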

Key Specifications

Cost: $$$$$
Context: 262K tokens
Parameters: 1T total (32B active per forward pass)
Released: Sep 04, 2025
Supported Parameters

This model supports the following parameters:

Stop, Structured Outputs, Response Format, Presence Penalty, Frequency Penalty, Temperature, Top P, Max Tokens, Tool Choice, Tools
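
A minimal request sketch exercising the sampling and penalty parameters above, assuming an OpenAI-compatible `/chat/completions` endpoint (the gateway URL, API key variable, and chosen values are placeholders):

```python
# Sketch of a payload using the sampling and penalty parameters listed above.
import os

import requests

payload = {
    "model": "moonshotai/kimi-k2-0905:exacto",
    "messages": [{"role": "user", "content": "Summarize MoE routing in two sentences."}],
    "temperature": 0.6,        # Temperature
    "top_p": 0.95,             # Top P
    "max_tokens": 512,         # Max Tokens
    "presence_penalty": 0.0,   # Presence Penalty
    "frequency_penalty": 0.1,  # Frequency Penalty
    "stop": ["###"],           # Stop sequence(s)
}

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",  # assumed gateway URL
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json=payload,
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])
```
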
Features

This model supports the following features:

Response Format, Tools, Structured Outputs
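
The Structured Outputs and Response Format features can be driven through an OpenAI-style `response_format` carrying a JSON Schema. A minimal sketch, assuming the same SDK and gateway as above and a hypothetical `bug_report` schema; confirm the exact `response_format` shape against your provider's documentation:

```python
# Structured Outputs sketch: constrain the reply to a JSON Schema.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",  # assumed gateway
    api_key=os.environ["OPENROUTER_API_KEY"],
)

schema = {
    "name": "bug_report",  # hypothetical schema, for illustration only
    "strict": True,
    "schema": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "severity": {"type": "string", "enum": ["low", "medium", "high"]},
        },
        "required": ["title", "severity"],
        "additionalProperties": False,
    },
}

response = client.chat.completions.create(
    model="moonshotai/kimi-k2-0905:exacto",
    messages=[{"role": "user", "content": "File a bug: the login page times out under load."}],
    response_format={"type": "json_schema", "json_schema": schema},
)
print(response.choices[0].message.content)  # a JSON string matching the schema
```
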
Model Pricing

Current Pricing

Feature | Price (per 1M tokens)
Prompt | $0.60
Completion | $2.50
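
A quick cost check at the listed base rates:

```python
# Back-of-the-envelope cost at $0.60 per 1M prompt tokens and
# $2.50 per 1M completion tokens (the base rates listed above).
PROMPT_RATE = 0.60 / 1_000_000      # USD per prompt token
COMPLETION_RATE = 2.50 / 1_000_000  # USD per completion token

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# Example: a 20k-token prompt with a 1k-token completion costs about $0.0145.
print(f"${request_cost(20_000, 1_000):.4f}")
```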

Available Endpoints

Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output)
Moonshot AI | moonshotai/kimi-k2-0905:exacto | 262K | $0.60 / 1M tokens | $2.50 / 1M tokens
Groq | moonshotai/kimi-k2-0905:exacto | 262K | $1.00 / 1M tokens | $3.00 / 1M tokens
Moonshot AI | moonshotai/kimi-k2-0905:exacto | 262K | $2.40 / 1M tokens | $10.00 / 1M tokens
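
Because the same endpoint is listed at different rates, per-request cost depends on where the call is routed. A small comparison using the table's per-1M-token rates (the labels distinguishing the two Moonshot AI listings are mine):

```python
# Per-request cost comparison across the listed endpoints, in USD.
ENDPOINTS = {
    "Moonshot AI (base listing)": (0.60, 2.50),
    "Groq": (1.00, 3.00),
    "Moonshot AI (higher-priced listing)": (2.40, 10.00),
}

prompt_tokens, completion_tokens = 50_000, 2_000
for name, (input_rate, output_rate) in ENDPOINTS.items():
    cost = (prompt_tokens * input_rate + completion_tokens * output_rate) / 1_000_000
    print(f"{name}: ${cost:.4f}")
```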