inclusionAI: Ling-1T

Text input · Text output · Unavailable
Author's Description

Ling-1T is a trillion-parameter open-weight large language model developed by inclusionAI and released under the MIT license. It is the first flagship non-thinking model in the Ling 2.0 series, built on a sparse-activation Mixture-of-Experts architecture with roughly 50 billion active parameters per token. The model supports up to 128K tokens of context and emphasizes efficient reasoning through an "Evolutionary Chain-of-Thought" (Evo-CoT) training strategy. Pre-trained on more than 20 trillion reasoning-dense tokens, Ling-1T achieves strong results across code generation, mathematics, and logical reasoning benchmarks while maintaining high inference efficiency. It employs FP8 mixed-precision training, MoE routing with QK normalization, and MTP (multi-token prediction) layers for compositional reasoning stability. It also introduces LPO (Linguistics-unit Policy Optimization) for post-training alignment, improving sentence-level semantic control. Ling-1T can perform complex text generation, multilingual reasoning, and front-end code synthesis with attention to both functionality and aesthetics.
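The sparsity claim above is easy to sanity-check with arithmetic: 1T total parameters with ~50B activated per token means only about 5% of the weights participate in each forward pass. A minimal sketch (the ratio is derived from the figures as stated, not an official specification):

```python
# Sparse-activation arithmetic from the description above.
total_params = 1_000_000_000_000   # 1T total parameters
active_params = 50_000_000_000     # ~50B activated per token via MoE routing

active_fraction = active_params / total_params
print(f"Active per token: {active_fraction:.1%}")  # prints "Active per token: 5.0%"
```

This is the core efficiency argument for MoE models: per-token compute scales with the active parameters, not the total.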

Key Specifications
Cost
$$$$
Context
131K
Parameters
1T
Released
Oct 12, 2025
Supported Parameters

This model supports the following parameters:

Stop, Max Tokens, Temperature, Top P, Response Format, Frequency Penalty, Presence Penalty, Tools, Seed, Tool Choice
Features

This model supports the following features:

Tools, Response Format
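The two features above can be sketched in one request body: a function-calling tool definition plus a JSON response format. The `get_weather` tool is purely hypothetical for illustration, and the field names again assume the common OpenAI-style schema:

```python
# Hypothetical tool-calling + response-format request body. The tool and all
# field names are illustrative assumptions based on the OpenAI-style schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool for illustration
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "inclusionai/ling-1t",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,                              # Tools feature
    "tool_choice": "auto",                       # let the model decide when to call
    "response_format": {"type": "json_object"},  # Response Format feature
}
```

In practice you would check the response for `tool_calls`, execute the named function, and return its result in a follow-up `tool` message.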
Performance Summary

Ling-1T demonstrates moderate speed, ranking in the 37th percentile, and competitive pricing, placing in the 44th percentile across benchmarks. Its reliability is exceptional, with a 99% success rate, indicating consistent and stable operation.

The model exhibits outstanding accuracy in specific areas. It achieved perfect scores in Hallucinations (100.0%), effectively acknowledging uncertainty, and Ethics (100.0%), demonstrating a strong grasp of ethical principles. Ling-1T also excels in Mathematics (96.0% accuracy, 98th percentile), showcasing advanced reasoning in this domain. Strong performance was also observed in General Knowledge (99.0% accuracy, 68th percentile), Email Classification (99.0% accuracy, 85th percentile), and Reasoning (86.0% accuracy, 75th percentile).

While its Coding performance (89.0% accuracy, 69th percentile) is solid, it is notably slower in this category. The primary area for improvement is Instruction Following, where it achieved 63.0% accuracy (70th percentile). Overall, Ling-1T is a highly reliable model with significant strengths in reasoning, ethical understanding, and mathematical problem-solving, making it a robust choice for complex tasks.

Model Pricing

Current Pricing

Feature Price (per 1M tokens)
Prompt $0.57
Completion $2.28
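At the listed rates, per-request cost is a simple linear function of prompt and completion token counts. A small calculator (the example token counts are illustrative, not from the source):

```python
# Cost estimate at the listed rates: $0.57 prompt / $2.28 completion per 1M tokens.
PROMPT_RATE = 0.57 / 1_000_000      # USD per prompt token
COMPLETION_RATE = 2.28 / 1_000_000  # USD per completion token

def estimate_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the USD cost of one request at the listed per-token rates."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# e.g. a 10K-token prompt with a 1K-token completion:
print(f"${estimate_cost(10_000, 1_000):.4f}")  # prints "$0.0080"
```

Note the asymmetry: completion tokens cost 4x prompt tokens, so long generations dominate the bill.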

Available Endpoints
Provider      Endpoint Name         Context Length  Pricing (Input)      Pricing (Output)
Chutes        inclusionai/ling-1t   131K            $0.57 / 1M tokens    $2.28 / 1M tokens
SiliconFlow   inclusionai/ling-1t   131K            $0.57 / 1M tokens    $2.28 / 1M tokens