Author's Description
R1 1776 is a version of DeepSeek-R1 that has been post-trained to remove censorship constraints related to topics restricted by the Chinese government. The model retains its original reasoning capabilities while providing direct responses to a wider range of queries. R1 1776 is an offline chat model that does not use the Perplexity search subsystem.

The model was tested on a multilingual dataset of over 1,000 examples covering sensitive topics to measure its likelihood of refusing or over-filtering responses. [Evaluation Results](https://cdn-uploads.huggingface.co/production/uploads/675c8332d01f593dc90817f5/GiN2VqC5hawUgAGJ6oHla.png)

Its performance on math and reasoning benchmarks remains similar to the base R1 model. [Reasoning Performance](https://cdn-uploads.huggingface.co/production/uploads/675c8332d01f593dc90817f5/n4Z9Byqp2S7sKUvCvI40R.png)

Read more in the [Blog Post](https://perplexity.ai/hub/blog/open-sourcing-r1-1776).
Performance Summary
Perplexity's R1 1776 demonstrates moderate speed, ranking in the 20th percentile across benchmarks, and premium pricing, falling in the 6th percentile for cost competitiveness. A standout feature is its exceptional reliability: it reaches the 100th percentile with minimal technical failures, consistently returning usable responses.

The model performs strongly across benchmark categories. It achieved perfect accuracy in both Ethics (100.0%) and General Knowledge (100.0%), often making it the most accurate model at its price point and speed. In Instruction Following and Coding it scored 84.0% (98th percentile) and 96.0% (99th percentile) respectively, and its Reasoning accuracy of 98.0% sits in the 93rd percentile. Its Email Classification accuracy is solid at 97.0%, though the 47th-percentile ranking shows stronger competition in that category.

Overall, R1 1776's key strengths are high accuracy across reasoning, ethics, coding, and general-knowledge tasks, combined with its censorship-removing post-training. Its primary weakness is its premium pricing.
Model Pricing
Current Pricing
| Feature | Price (per 1M tokens) |
|---|---|
| Prompt | $2 |
| Completion | $8 |
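At these rates, per-request cost is a simple linear function of token counts. A minimal sketch (rates from the pricing table above; token counts are illustrative):

```python
# Estimate the dollar cost of one R1 1776 request at the listed rates.
PROMPT_PRICE_PER_M = 2.00      # $ per 1M prompt tokens
COMPLETION_PRICE_PER_M = 8.00  # $ per 1M completion tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Return the dollar cost of a single request."""
    return (prompt_tokens * PROMPT_PRICE_PER_M
            + completion_tokens * COMPLETION_PRICE_PER_M) / 1_000_000

# Example: a 2,000-token prompt with a 1,000-token completion.
print(f"${request_cost(2_000, 1_000):.4f}")  # → $0.0120
```

Completion tokens dominate the bill at a 4:1 price ratio, so long reasoning outputs are the main cost driver.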
Available Endpoints
| Provider | Endpoint Name | Context Length | Pricing (Input) | Pricing (Output) |
|---|---|---|---|---|
| Perplexity | perplexity/r1-1776 | 128K | $2 / 1M tokens | $8 / 1M tokens |
| Together | perplexity/r1-1776 | 163K | $2 / 1M tokens | $8 / 1M tokens |
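Since this is a chat model, requests take the usual messages format. A minimal sketch of building a request body, assuming an OpenAI-compatible chat-completions API (many hosts of this model expose one — check your provider's documentation for the exact URL, auth scheme, and whether the `perplexity/r1-1776` identifier from the table above is the one your provider uses):

```python
import json

# Assumed model identifier, taken from the endpoint table above.
payload = {
    "model": "perplexity/r1-1776",
    "messages": [
        {"role": "user", "content": "Explain the Monty Hall problem."}
    ],
    "max_tokens": 1024,  # cap completion tokens, the pricier side
}

# Serialize to the JSON body you would POST to the provider's
# chat-completions endpoint with your API key in the headers.
body = json.dumps(payload)
print(body)
```

Note that R1 1776 is offline: unlike Perplexity's Sonar models, no search parameters apply.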
Other Models by perplexity

| Model | Released | Params | Context | Features | Speed | Ability | Cost |
|---|---|---|---|---|---|---|---|
| Perplexity: Sonar Reasoning Pro | Mar 06, 2025 | — | 128K | Image input, text input, text output | ★ | ★★★★ | $$$$$ |
| Perplexity: Sonar Pro | Mar 06, 2025 | — | 200K | Image input, text input, text output | ★★★ | ★★★★ | $$$$$ |
| Perplexity: Sonar Deep Research | Mar 06, 2025 | — | 128K | Text input, text output | — | — | $$$$$ |
| Perplexity: Sonar Reasoning | Jan 28, 2025 | — | 127K | Text input, text output | ★ | ★★★★ | $$$$$ |
| Perplexity: Sonar | Jan 27, 2025 | — | 127K | Image input, text input, text output | ★★★ | ★★★ | $$$$ |
| Perplexity: Llama 3.1 Sonar 70B Online (unavailable) | Jul 31, 2024 | 70B | 127K | Text input, text output | ★★ | ★★ | $$$$ |
| Perplexity: Llama 3.1 Sonar 8B Online (unavailable) | Jul 31, 2024 | 8B | 127K | Text input, text output | ★★ | ★ | $$ |