Perplexity: R1 1776

Text input → Text output (currently unavailable)
Author's Description

R1 1776 is a version of DeepSeek-R1 that has been post-trained to remove censorship constraints related to topics restricted by the Chinese government. The model retains its original reasoning capabilities while providing direct responses to a wider range of queries. R1 1776 is an offline chat model that does not use the Perplexity search subsystem.

The model was tested on a multilingual dataset of over 1,000 examples covering sensitive topics to measure its likelihood of refusal or overly filtered responses: [Evaluation Results](https://cdn-uploads.huggingface.co/production/uploads/675c8332d01f593dc90817f5/GiN2VqC5hawUgAGJ6oHla.png)

Its performance on math and reasoning benchmarks remains similar to the base R1 model: [Reasoning Performance](https://cdn-uploads.huggingface.co/production/uploads/675c8332d01f593dc90817f5/n4Z9Byqp2S7sKUvCvI40R.png)

Read more on the [Blog Post](https://perplexity.ai/hub/blog/open-sourcing-r1-1776)

Key Specifications
Cost: $$$$$
Context: 128K
Released: Feb 19, 2025
Supported Parameters

This model supports the following parameters:

Include Reasoning, Max Tokens, Top P, Frequency Penalty, Reasoning, Temperature, Presence Penalty
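The parameters above map onto an OpenAI-style chat-completions payload. A minimal sketch, assuming OpenRouter's snake_case field names; the message content and parameter values are illustrative, not recommendations:

```python
import json

# Request payload for perplexity/r1-1776 using the supported parameters
# (field names assume an OpenAI-compatible chat-completions API).
payload = {
    "model": "perplexity/r1-1776",
    "messages": [
        {"role": "user", "content": "Explain the birthday paradox briefly."}
    ],
    # Sampling parameters from the supported list:
    "max_tokens": 1024,
    "temperature": 0.7,
    "top_p": 0.9,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    # Reasoning control: ask for the model's reasoning tokens in the response.
    "include_reasoning": True,
}

print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's chat-completions endpoint with an API key; only the field names listed under Supported Parameters are honored for this model.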
Features

This model supports the following features:

Reasoning
Performance Summary

Perplexity: R1 1776, a post-trained version of DeepSeek-R1, demonstrates strong performance across several key areas, particularly in its ability to provide direct responses to a wider range of queries by removing certain censorship constraints. The model exhibits exceptional reliability, achieving a 100% success rate across all benchmarks, indicating consistent and usable output. However, it tends to have longer response times, ranking in the 19th percentile for speed, and is positioned at premium pricing levels, falling into the 5th percentile for cost-effectiveness.

On specific benchmarks, R1 1776 excels in General Knowledge and Ethics, achieving perfect 100% accuracy in both categories, and is noted as the most accurate model at its price point and speed for these tasks. It also performs very strongly in Coding (96% accuracy) and Instruction Following (84% accuracy), placing it in the top percentiles for these capabilities. Its Email Classification accuracy is 97%, though this places it in the 43rd percentile, suggesting more competitive performance exists in this specific area.

The model's core reasoning capabilities are retained from the base R1 model, ensuring high-quality analytical output. Its primary strength lies in its uncensored responses and high accuracy in knowledge-based and ethical reasoning tasks, while its main weaknesses are its slower response times and higher cost.

Model Pricing

Current Pricing

| Feature | Price (per 1M tokens) |
| --- | --- |
| Prompt | $2 |
| Completion | $8 |
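At the listed rates, per-request cost is simple arithmetic. A sketch with hypothetical token counts:

```python
# Listed rates: $2 per 1M prompt tokens, $8 per 1M completion tokens.
PROMPT_RATE = 2.00 / 1_000_000
COMPLETION_RATE = 8.00 / 1_000_000

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of a single request at the listed per-token rates."""
    return prompt_tokens * PROMPT_RATE + completion_tokens * COMPLETION_RATE

# e.g. a 2,000-token prompt with an 8,000-token reasoning-heavy completion:
print(f"${request_cost(2_000, 8_000):.4f}")  # → $0.0680
```

Note that reasoning tokens count as completion tokens, so reasoning-heavy responses are dominated by the $8/1M output rate.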

Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
| --- | --- | --- | --- | --- |
| Perplexity | perplexity/r1-1776 | 128K | $2 / 1M tokens | $8 / 1M tokens |
| Together | perplexity/r1-1776 | 163K | $2 / 1M tokens | $8 / 1M tokens |