Author's Description
R1 1776 is a version of DeepSeek-R1 that has been post-trained to remove censorship constraints on topics restricted by the Chinese government. The model retains its original reasoning capabilities while providing direct responses to a wider range of queries. R1 1776 is an offline chat model and does not use Perplexity's search subsystem.

The model was tested on a multilingual dataset of over 1,000 examples covering sensitive topics to measure its likelihood of refusing or returning overly filtered responses.

[Evaluation Results](https://cdn-uploads.huggingface.co/production/uploads/675c8332d01f593dc90817f5/GiN2VqC5hawUgAGJ6oHla.png)

Its performance on math and reasoning benchmarks remains similar to the base R1 model.

[Reasoning Performance](https://cdn-uploads.huggingface.co/production/uploads/675c8332d01f593dc90817f5/n4Z9Byqp2S7sKUvCvI40R.png)

Read more in the [Blog Post](https://perplexity.ai/hub/blog/open-sourcing-r1-1776).
Performance Summary
Perplexity's R1 1776 is distinguished primarily by its post-training to remove Chinese government censorship constraints, enabling direct responses to a broader range of sensitive queries while retaining the original DeepSeek-R1 reasoning capabilities. As an offline chat model, it does not use Perplexity's search subsystem.

R1 1776 tends toward longer response times, ranking in the 19th percentile for speed across benchmarks, and its pricing sits at premium levels, in the 6th percentile for cost competitiveness.

Despite the slower speed and higher cost, the model delivers exceptional accuracy across several critical domains. It achieved perfect accuracy on both the Ethics and General Knowledge benchmarks, often ranking as the most accurate model at its price point and among models of comparable speed. Coding performance is also outstanding, placing in the top 3 for accuracy at 96.0%; Reasoning accuracy is strong at 98.0%; and Email Classification holds a solid 97.0%.

Overall, R1 1776's key strengths are high accuracy across diverse, challenging benchmarks, particularly ethics, general knowledge, and coding, combined with its uncensored response capability. Its primary weaknesses are slower processing speed and premium pricing.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $2 |
| Completion | $8 |
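At these rates, the per-request cost follows directly from token counts. The helper below is a minimal sketch for budgeting only; the function name and example token counts are illustrative and not part of any Perplexity SDK:

```python
def request_cost_usd(prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one R1 1776 request.

    Rates are taken from the pricing table above:
    $2 per 1M prompt tokens, $8 per 1M completion tokens.
    """
    return (prompt_tokens * 2.0 + completion_tokens * 8.0) / 1_000_000

# Example: a 2,000-token prompt with an 8,000-token reasoning-heavy completion
print(f"${request_cost_usd(2_000, 8_000):.4f}")  # -> $0.0680
```

Since R1-style models emit long chain-of-thought completions, output tokens, billed at 4x the input rate, will typically dominate the bill.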
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Perplexity | perplexity/r1-1776 | 128K | $2 / 1M tokens | $8 / 1M tokens |
| Together | perplexity/r1-1776 | 163K | $2 / 1M tokens | $8 / 1M tokens |
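As a usage illustration, the sketch below sends a chat completion request to the `perplexity/r1-1776` endpoint through an OpenAI-compatible client. The base URL and API-key placeholder are assumptions; substitute the values for whichever provider above (Perplexity or Together) you route through:

```python
from openai import OpenAI

# Assumed OpenAI-compatible gateway; set base_url and key for your provider.
client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder, not a real endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="perplexity/r1-1776",  # endpoint name from the table above
    messages=[
        {"role": "user", "content": "Explain the Monty Hall problem briefly."}
    ],
)
print(response.choices[0].message.content)
```

Because R1 1776 is an offline model, responses reflect its training data only; no live search results are injected.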
Other Models by perplexity
| Model | Released | Params | Context | Modalities | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| Perplexity: Sonar Reasoning Pro | Mar 06, 2025 | — | 128K | Image input, Text input, Text output | ★ | ★★★★★ | $$$$$ |
| Perplexity: Sonar Pro | Mar 06, 2025 | — | 200K | Image input, Text input, Text output | ★★★ | ★★★★★ | $$$$$ |
| Perplexity: Sonar Deep Research | Mar 06, 2025 | — | 128K | Text input, Text output | — | — | $$$$$ |
| Perplexity: Sonar Reasoning | Jan 28, 2025 | — | 127K | Text input, Text output | ★ | ★★★★ | $$$$$ |
| Perplexity: Sonar | Jan 27, 2025 | — | 127K | Image input, Text input, Text output | ★★★ | ★★★ | $$$$ |
| Perplexity: Llama 3.1 Sonar 70B Online (Unavailable) | Jul 31, 2024 | 70B | 127K | Text input, Text output | ★★ | ★ | $$$$ |
| Perplexity: Llama 3.1 Sonar 8B Online (Unavailable) | Jul 31, 2024 | 8B | 127K | Text input, Text output | ★★ | ★ | $$ |