Z.ai: GLM-5

Text input · Text output
Author's Description

GLM-5 is Z.ai’s flagship open-source foundation model engineered for complex systems design and long-horizon agent workflows. Built for expert developers, it delivers production-grade performance on large-scale programming tasks, rivaling leading closed-source models. With advanced agentic planning, deep backend reasoning, and iterative self-correction, GLM-5 moves beyond code generation to full-system construction and autonomous execution.

Key Specifications
Cost: $$$$$
Context: 202K
Released: Feb 11, 2026
Speed, Ability, Reliability: see the Performance Summary below
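
As a rough sanity check before sending long inputs, you can estimate whether a prompt fits the advertised 202K-token window. The sketch below uses an assumed ratio of roughly four characters per token; GLM-5's actual tokenizer is not documented on this page, so leave generous headroom.

```python
# Rough context-budget check against GLM-5's advertised 202K-token window.
# The chars-per-token ratio is a heuristic assumption, not the real tokenizer.
CONTEXT_WINDOW = 202_800   # tokens (figure quoted in the Performance Summary)
CHARS_PER_TOKEN = 4        # assumed average

def estimate_tokens(text: str) -> int:
    """Very rough token estimate from character count."""
    return max(1, len(text) // CHARS_PER_TOKEN)

def fits_in_context(prompt: str, max_output_tokens: int = 4_096) -> bool:
    """True if the prompt plus the planned completion likely fits the window."""
    return estimate_tokens(prompt) + max_output_tokens <= CONTEXT_WINDOW

print(fits_in_context("Refactor the billing service to use idempotency keys. " * 1000))
```
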
Supported Parameters

This model supports the following parameters:

Response Format, Tools, Reasoning, Top P, Max Tokens, Include Reasoning, Structured Outputs, Tool Choice, Temperature
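
A minimal request exercising the sampling parameters above might look like the following, assuming the provider exposes an OpenAI-compatible chat completions endpoint; the base URL, API key, and `z-ai/glm-5` model slug are placeholders rather than values taken from this listing.

```python
# Minimal sketch: chat completion with GLM-5's sampling parameters set.
# Assumes an OpenAI-compatible endpoint; base_url, api_key, and model slug
# are placeholders to replace with your provider's actual values.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-provider.example/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                       # placeholder key
)

response = client.chat.completions.create(
    model="z-ai/glm-5",          # assumed slug; check your provider's model list
    messages=[
        {"role": "system", "content": "You are a senior backend engineer."},
        {"role": "user", "content": "Sketch a rate limiter for a public API."},
    ],
    temperature=0.7,             # Temperature
    top_p=0.95,                  # Top P
    max_tokens=1024,             # Max Tokens
)

print(response.choices[0].message.content)
```

Reasoning and Include Reasoning are exposed differently by different providers, so they are omitted from this sketch.
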
Features

This model supports the following features:

Reasoning, Tools, Structured Outputs, Response Format
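
The tool-calling and structured-output features can be combined in a single request. The sketch below again assumes an OpenAI-compatible endpoint; the `run_tests` tool schema and the model slug are purely illustrative.

```python
# Hedged sketch of tool calling plus a JSON response format with GLM-5.
# The endpoint, model slug, and run_tests tool are illustrative assumptions.
from openai import OpenAI

client = OpenAI(base_url="https://your-provider.example/v1", api_key="YOUR_API_KEY")

tools = [{
    "type": "function",
    "function": {
        "name": "run_tests",  # hypothetical tool, not part of this listing
        "description": "Run the project's test suite and report failures.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="z-ai/glm-5",                       # assumed slug
    messages=[{"role": "user", "content": "Check the auth module, then answer in JSON."}],
    tools=tools,                              # Tools
    tool_choice="auto",                       # Tool Choice
    response_format={"type": "json_object"},  # Response Format
)

message = response.choices[0].message
print(message.tool_calls or message.content)
```
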
Performance Summary

Z.ai's GLM-5, released on February 11, 2026, is positioned as a flagship open-source foundation model for complex systems design and long-horizon agent workflows, targeting expert developers. With a context length of 202,800 tokens, it aims for production-grade performance. The model is exceptionally reliable, with a 97% success rate across benchmarks, but it tends toward longer response times (7th percentile for speed) and premium pricing (11th percentile for cost).

Across categories, GLM-5 is strongest in Coding (95.8% accuracy, 95th percentile) and Instruction Following (77.0% accuracy, 86th percentile), in line with its focus on complex programming tasks. Reasoning is also strong at 93.6% accuracy (81st percentile). General Knowledge is high in absolute terms (98.9% accuracy) but only moderate relative to peers (62nd percentile). The clearest weakness is Hallucinations, at 80.0% accuracy (25th percentile), suggesting room for improvement in acknowledging uncertainty. Email Classification (98.0%) and Ethics (99.0%) are solid but not top-tier. High cost and long duration across several benchmarks, particularly Hallucinations, General Knowledge, and Coding, underscore the model's slower processing and premium pricing.

Model Pricing

Current Pricing

Feature | Price (per 1M tokens)
Prompt | $1.00
Completion | $3.20
Input Cache Read | $0.20
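
At these rates, per-request cost is simple arithmetic. The sketch below assumes the base $1.00 / $3.20 / $0.20 figures above and that cached prompt tokens are billed at the cache-read rate.

```python
# Estimate the dollar cost of one GLM-5 call at the listed base rates.
PROMPT_RATE = 1.00 / 1_000_000       # $ per prompt token
COMPLETION_RATE = 3.20 / 1_000_000   # $ per completion token
CACHE_READ_RATE = 0.20 / 1_000_000   # $ per cached prompt token (assumed billing model)

def request_cost(prompt_tokens: int, completion_tokens: int, cached_tokens: int = 0) -> float:
    """Dollar cost, with cached prompt tokens billed at the cache-read rate."""
    uncached = max(prompt_tokens - cached_tokens, 0)
    return (uncached * PROMPT_RATE
            + cached_tokens * CACHE_READ_RATE
            + completion_tokens * COMPLETION_RATE)

# Example: 50K-token prompt (30K served from cache) and a 4K-token completion.
print(f"${request_cost(50_000, 4_000, cached_tokens=30_000):.4f}")  # ≈ $0.0388
```
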

Available Endpoints
Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output)
AtlasCloud | z-ai/glm-5-20260211 | 202K | $1.00 / 1M tokens | $3.20 / 1M tokens
Novita | z-ai/glm-5-20260211 | 202K | $1.00 / 1M tokens | $3.20 / 1M tokens
Z.AI | z-ai/glm-5-20260211 | 202K | $1.00 / 1M tokens | $3.20 / 1M tokens
Phala | z-ai/glm-5-20260211 | 202K | $1.20 / 1M tokens | $3.50 / 1M tokens
GMICloud | z-ai/glm-5-20260211 | 202K | $1.00 / 1M tokens | $3.20 / 1M tokens
Parasail | z-ai/glm-5-20260211 | 202K | $1.00 / 1M tokens | $3.20 / 1M tokens
Friendli | z-ai/glm-5-20260211 | 202K | $1.00 / 1M tokens | $3.20 / 1M tokens
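
When choosing between endpoints, a blended per-token price makes the listed rates easier to compare. The rates below are copied from the table above; the 20% output share is an assumed workload mix, not data from this page.

```python
# Compare the endpoints above by blended per-1M-token price.
ENDPOINT_RATES = {           # provider: (input $/1M, output $/1M)
    "AtlasCloud": (1.00, 3.20),
    "Novita":     (1.00, 3.20),
    "Z.AI":       (1.00, 3.20),
    "Phala":      (1.20, 3.50),
    "GMICloud":   (1.00, 3.20),
    "Parasail":   (1.00, 3.20),
    "Friendli":   (1.00, 3.20),
}

def blended_price(input_rate: float, output_rate: float, output_share: float = 0.2) -> float:
    """Weighted per-1M-token price for an assumed input/output mix."""
    return (1 - output_share) * input_rate + output_share * output_rate

for provider, (inp, out) in sorted(ENDPOINT_RATES.items(), key=lambda kv: blended_price(*kv[1])):
    print(f"{provider:<10} ${blended_price(inp, out):.2f} per 1M tokens (blended)")
```
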
Benchmark Results
Benchmark Category Reasoning Strategy Free Executions Accuracy Cost Duration
Other Models by z-ai