Typhoon2 70B Instruct

Text input · Text output
Author's Description

Llama3.1-Typhoon2-70B-Instruct is a Thai-English instruction-tuned language model with 70 billion parameters, built on Llama 3.1. It demonstrates strong performance across general instruction-following, math, coding, and tool-use tasks, with state-of-the-art results in Thai-specific benchmarks such as IFEval, MT-Bench, and Thai-English code-switching. The model excels in bilingual reasoning and function-calling scenarios, offering high accuracy across diverse domains. Comparative evaluations show consistent improvements over prior Thai LLMs and other Llama-based baselines. Full results and methodology are available in the [technical report](https://arxiv.org/abs/2412.13702).

Key Specifications
Cost: $$$$
Context: 8K
Parameters: 70B
Released: Mar 28, 2025
Supported Parameters

This model supports the following parameters:

Stop, Presence Penalty, Logit Bias, Top P, Temperature, Min P, Frequency Penalty, Max Tokens
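
As a rough illustration, the payload below maps these options onto the field names used by typical OpenAI-compatible chat completion APIs. The field names, values, and prompt are assumptions for demonstration only, not taken from the provider's documentation.

```python
# Hypothetical request payload exercising the supported sampling parameters.
# Field names follow common OpenAI-compatible conventions (min_p is a
# non-standard extension offered by some providers) and are assumptions.
payload = {
    "model": "scb10x/llama3.1-typhoon2-70b-instruct",
    "messages": [
        {"role": "user", "content": "Summarise this Thai paragraph in English: ..."}
    ],
    "max_tokens": 512,          # Max Tokens
    "temperature": 0.7,         # Temperature
    "top_p": 0.9,               # Top P
    "min_p": 0.05,              # Min P
    "frequency_penalty": 0.0,   # Frequency Penalty
    "presence_penalty": 0.0,    # Presence Penalty
    "logit_bias": {},           # Logit Bias (token id -> bias)
    "stop": ["</answer>"],      # Stop sequences
}
```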
Performance Summary

Typhoon2 70B Instruct, provided by scb10x, is a 70-billion-parameter Thai-English instruction-tuned language model built on Llama 3.1, released on March 28, 2025. It consistently ranks among the fastest models across the four evaluated benchmarks, indicating exceptional speed, and it demonstrates exceptional reliability, placing in the 100th percentile across those benchmarks, meaning it consistently returns usable responses with minimal technical failures. Pricing is listed at $0.88 per 1M tokens for both prompt and completion.

Despite these strong foundational capabilities and high reliability, the model scored 0.0% accuracy across all evaluated baseline benchmarks, including Email Classification, Reasoning, Ethics, and General Knowledge. This indicates a significant weakness on these specific tasks, in contrast to its reported strengths in general instruction-following, math, coding, tool use, and its state-of-the-art results on Thai-specific benchmarks such as IFEval and MT-Bench. Its core strengths lie in bilingual reasoning and function-calling scenarios, and comparative evaluations show improvements over prior Thai LLMs and other Llama-based baselines, as detailed in its technical report. The current benchmark results suggest that further evaluation or fine-tuning is needed for general English-centric baseline tasks.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $0.88 |
| Completion | $0.88 |
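
As a quick sanity check on these rates, the sketch below estimates the cost of a single request; the token counts are hypothetical.

```python
# Back-of-the-envelope cost estimate at the listed rate of $0.88 per 1M tokens,
# which applies to both prompt and completion tokens.
PRICE_PER_MILLION = 0.88  # USD per 1M tokens

prompt_tokens = 3_000        # hypothetical prompt length
completion_tokens = 1_000    # hypothetical completion length

cost = (prompt_tokens + completion_tokens) / 1_000_000 * PRICE_PER_MILLION
print(f"Estimated cost: ${cost:.6f}")  # -> Estimated cost: $0.003520
```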


Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Together | scb10x/llama3.1-typhoon2-70b-instruct | 8K | $0.88 / 1M tokens | $0.88 / 1M tokens |
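
Since the endpoint is served through Together, it can be reached with any OpenAI-compatible client. The sketch below is a minimal example assuming Together's standard base URL and a `TOGETHER_API_KEY` environment variable; verify both against Together's documentation before use.

```python
# Minimal sketch of calling the Together-hosted endpoint via an
# OpenAI-compatible client. Base URL and env var name are assumptions.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.together.xyz/v1",
    api_key=os.environ["TOGETHER_API_KEY"],
)

response = client.chat.completions.create(
    model="scb10x/llama3.1-typhoon2-70b-instruct",
    messages=[{"role": "user", "content": "สวัสดีครับ! Please reply in English."}],
    max_tokens=256,
    temperature=0.7,
)
print(response.choices[0].message.content)
```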
Benchmark Results
| Benchmark | Category | Reasoning | Free Executions | Accuracy | Cost | Duration |