inclusionAI: Ling-1T

Input: text · Output: text
Author's Description

Ling-1T is a trillion-parameter open-weight large language model developed by inclusionAI and released under the MIT license. It is the first flagship non-thinking model in the Ling 2.0 series, built on a sparse-activation Mixture-of-Experts (MoE) architecture that activates roughly 50 billion parameters per token. The model supports up to 128K tokens of context and emphasizes efficient reasoning through an "Evolutionary Chain-of-Thought" (Evo-CoT) training strategy. Pre-trained on more than 20 trillion reasoning-dense tokens, Ling-1T achieves strong results on code generation, mathematics, and logical-reasoning benchmarks while maintaining high inference efficiency. It employs FP8 mixed-precision training, MoE routing with QK normalization, and multi-token-prediction (MTP) layers for compositional reasoning stability. For post-training alignment it introduces LPO (Linguistics-unit Policy Optimization), which improves sentence-level semantic control. Ling-1T can perform complex text generation, multilingual reasoning, and front-end code synthesis with attention to both functionality and aesthetics.
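
Of the techniques named above, QK normalization is the easiest to illustrate: in one common formulation, queries and keys are normalized (for example with RMSNorm) before the attention logits are computed, which bounds logit magnitude and stabilizes training. The NumPy sketch below shows that generic formulation; it is an illustration of the technique, not Ling-1T's actual implementation.

```python
import numpy as np

def rms_norm(x: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """RMS-normalize the last dimension."""
    return x / np.sqrt(np.mean(x**2, axis=-1, keepdims=True) + eps)

def qk_norm_attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention with QK normalization:
    queries and keys are normalized before the logits are formed,
    keeping logit magnitudes bounded."""
    q, k = rms_norm(q), rms_norm(k)
    logits = q @ k.swapaxes(-1, -2) / np.sqrt(q.shape[-1])
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

# Toy shapes: (seq_len, head_dim)
q, k, v = (np.random.randn(4, 8) for _ in range(3))
print(qk_norm_attention(q, k, v).shape)  # (4, 8)
```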

Key Specifications
Context: 131K
Parameters: 1T
Released: Oct 12, 2025
Supported Parameters

This model supports the following parameters:

Max Tokens, Structured Outputs, Logit Bias, Presence Penalty, Stop, Tools, Seed, Tool Choice, Top P, Frequency Penalty, Min P, Logprobs, Temperature, Top Logprobs
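
As a concrete example, the sketch below shows a minimal chat-completion request that exercises several of these parameters through the `openai` Python client against an OpenAI-compatible endpoint. The base URL and API-key environment variable are placeholders, not official values; only the model id `inclusionai/ling-1t` comes from the endpoint listing further down.

```python
import os
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint; substitute your provider's URL and key.
client = OpenAI(
    base_url="https://example-provider/api/v1",  # placeholder base URL
    api_key=os.environ["PROVIDER_API_KEY"],      # placeholder env var
)

response = client.chat.completions.create(
    model="inclusionai/ling-1t",
    messages=[{"role": "user", "content": "Summarize QK normalization in two sentences."}],
    max_tokens=256,              # Max Tokens
    temperature=0.7,             # Temperature
    top_p=0.95,                  # Top P
    frequency_penalty=0.1,       # Frequency Penalty
    presence_penalty=0.0,        # Presence Penalty
    seed=42,                     # Seed (best-effort determinism)
    stop=["\n\n"],               # Stop sequences
    logprobs=True,               # Logprobs
    top_logprobs=5,              # Top Logprobs
    extra_body={"min_p": 0.05},  # Min P: not in the standard OpenAI schema,
                                 # passed through if the provider supports it
)
print(response.choices[0].message.content)
```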
Features

This model supports the following features:

Structured Outputs, Tools
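
A hedged sketch of how these two features are typically invoked on OpenAI-compatible endpoints follows; the tool name `get_weather`, the JSON schema, and the `client` object (set up in the previous sketch) are illustrative assumptions, not part of the model card.

```python
# Tools: declare a function the model may call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

tool_resp = client.chat.completions.create(
    model="inclusionai/ling-1t",
    messages=[{"role": "user", "content": "What's the weather in Hangzhou?"}],
    tools=tools,
    tool_choice="auto",  # Tool Choice: let the model decide whether to call it
)
print(tool_resp.choices[0].message.tool_calls)

# Structured Outputs: constrain the reply to a JSON schema.
json_resp = client.chat.completions.create(
    model="inclusionai/ling-1t",
    messages=[{"role": "user", "content": "Extract the release date of Ling-1T."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "release_info",
            "schema": {
                "type": "object",
                "properties": {"release_date": {"type": "string"}},
                "required": ["release_date"],
            },
        },
    },
)
print(json_resp.choices[0].message.content)
```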
Model Pricing

Current Pricing

Feature      Price (per 1M tokens)
Prompt       $1
Completion   $3
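
At these rates, a request costs prompt_tokens × $1/1M plus completion_tokens × $3/1M. The small helper below makes the arithmetic concrete; the function name and example token counts are illustrative.

```python
def request_cost_usd(prompt_tokens: int, completion_tokens: int) -> float:
    """Cost in USD at $1 per 1M prompt tokens and $3 per 1M completion tokens."""
    return prompt_tokens * 1.00 / 1_000_000 + completion_tokens * 3.00 / 1_000_000

# Example: a 10,000-token prompt with a 2,000-token completion.
print(f"${request_cost_usd(10_000, 2_000):.4f}")  # $0.0160
```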

Available Endpoints
Provider   Endpoint Name                  Context Length   Pricing (Input)   Pricing (Output)
Chutes     Chutes | inclusionai/ling-1t   131K             $1 / 1M tokens    $3 / 1M tokens