DeepSeek R1: Stricter Censorship After AI Model Update

Chinese AI startup DeepSeek’s latest model has posted remarkable results on coding, mathematics, and general-knowledge benchmarks, nearly matching OpenAI’s flagship o3 model. But the upgraded R1, known as “R1-0528,” may be less willing to answer controversial questions, especially on topics the Chinese government deems contentious.

According to tests by the pseudonymous developer behind SpeechMap, a platform that compares how different models handle sensitive and controversial topics, R1-0528 is far less tolerant of contentious free-speech topics than previous DeepSeek releases, and is “by far the most strictly censored DeepSeek model” when it comes to criticizing the Chinese government.

As Wired explained in a January article, Chinese models must comply with strict information controls. A 2023 law prohibits models from generating content that “endangers national unity and social harmony,” wording that can be read to cover anything contradicting the government’s historical and political narratives. To comply, Chinese startups typically censor their models using prompt-level filters or fine-tuning. One study found that DeepSeek’s initial R1 refused to answer 85% of questions on topics the Chinese government considers politically controversial.

According to xlr8harder, R1-0528 censors answers on topics such as the detention camps in China’s Xinjiang region, where more than one million Uighur Muslims have been arbitrarily detained. While it will sometimes criticize aspects of Chinese government policy (in xlr8harder’s tests it cited the Xinjiang camps as an example of human rights abuses), the model often defaults to the Chinese government’s official position when asked directly.

TechCrunch reported observing the same behavior in its own brief tests.

Publicly available Chinese AI models, including video-generation models, have previously been criticized for censoring topics sensitive to the Chinese government, such as the Tiananmen Square massacre. In December, Clément Delangue, CEO of AI development platform Hugging Face, warned of the unintended consequences of Western companies building on top of well-performing, openly licensed Chinese AI.

The impact of censorship on AI models has long been a concern, especially in the context of geopolitical tensions. The case of DeepSeek R1-0528 highlights the complex balance between pursuing advances in AI technology and upholding freedom of thought and access to information. It is worth exploring in depth how DeepSeek is responding to these challenges, and what this means for the future development of the AI industry.

Definition and Forms of Censorship

Censorship, broadly defined as the restriction or suppression of information, can take many forms. In the field of artificial intelligence, censorship usually manifests itself as:

  • Content Filtering: Preventing the model from generating or displaying certain types of content, such as those involving politically sensitive topics, violence, discrimination, etc.
  • Information Distortion: The information presented by the model is modified or distorted to conform to a certain ideology or political stance.
  • Answer Evasion: The model refuses to answer certain questions, or gives vague, ambiguous answers.
  • Prompt Engineering: Steering the model toward answers that serve a specific intent through carefully designed system prompts.

The DeepSeek R1-0528 case suggests that the model may have adopted several of the above censorship methods, especially when it comes to topics sensitive to the Chinese government.
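A crude way to operationalize the distinction between answer evasion and outright refusal, similar in spirit to how a SpeechMap-style evaluation might label responses, is to scan model outputs for marker phrases. The marker lists below are illustrative assumptions, not SpeechMap’s actual rubric.

```python
# Sketch of labeling model responses as complete, evasive, or refused,
# then computing a refusal rate over a batch of responses.
# The marker phrases are illustrative only.

REFUSAL_MARKERS = ("i can't", "i cannot", "i'm not able to")
EVASION_MARKERS = ("it's complicated", "there are many perspectives")


def classify_response(text: str) -> str:
    """Label a single response: 'refused', 'evasive', or 'complete'."""
    lowered = text.lower()
    if any(m in lowered for m in REFUSAL_MARKERS):
        return "refused"
    if any(m in lowered for m in EVASION_MARKERS):
        return "evasive"
    return "complete"


def refusal_rate(responses: list[str]) -> float:
    """Fraction of responses that were refused outright."""
    refused = sum(1 for r in responses if classify_response(r) == "refused")
    return refused / len(responses)
```

A real evaluation would use a far more robust classifier (often another model acting as judge), but the pipeline shape is the same: pose a fixed question set, label each answer, and report aggregate rates like the 85% refusal figure cited above.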

Reasons and Motivations for Censorship

The reasons and motivations for censoring AI models are often multifaceted:

  • Laws and Regulations: Some countries or regions have enacted laws and regulations that require AI models to comply with specific information control standards. For example, relevant Chinese laws prohibit models from generating content that “endangers national unity and social harmony.”
  • Political Pressure: Governments or political groups may exert pressure on AI companies to censor the content of their models in order to maintain their political interests.
  • Social Responsibility: AI companies may actively censor the content of their models out of social responsibility considerations in order to avoid spreading harmful information or causing social unrest.
  • Commercial Interests: In order to avoid offending the government or the public, AI companies may censor the content of their models to protect their commercial interests.

As a Chinese AI company, DeepSeek may face considerations from laws and regulations, political pressure, and social responsibility, which may force it to censor R1-0528.

Potential Impacts of Censorship

AI model censorship may have the following potential impacts:

  • Restricting Information Access: Censorship limits users’ access to comprehensive and objective information, thereby affecting their judgment and decision-making.
  • Stifling Innovation: Censorship limits the development of AI technology because researchers may not be free to explore and test various ideas.
  • Exacerbating Social Divisions: Censorship may exacerbate social divisions, as different groups may only have access to information that aligns with their own positions.
  • Damaging Trust: If users find that an AI model engages in censorship, they may lose trust in the model.

The DeepSeek R1-0528 case suggests that censorship may restrict users’ access to information on topics sensitive to the Chinese government.

Strategies for Responding to AI Censorship

The following strategies can be adopted to respond to AI censorship:

  • Technical Means: Develop technologies that can detect and bypass censorship filters.
  • Legal Action: File legal action against censorship that violates freedom of speech.
  • Public Advocacy: Raise public awareness of AI censorship and call on governments and businesses to take action.
  • Decentralized AI: Develop decentralized AI platforms to reduce the possibility of censorship.
  • Open Source Collaboration: Encourage open source collaboration to jointly develop more open and transparent AI models.

DeepSeek’s Response

DeepSeek has not publicly responded to allegations of censorship of R1-0528. If DeepSeek responds to this, it is worth paying attention to the following aspects:

  • Does DeepSeek admit to censoring R1-0528?
  • If so, what are DeepSeek’s reasons and motivations for censorship?
  • Does DeepSeek plan to change its censorship policy?
  • How does DeepSeek balance technological progress with freedom of information?

DeepSeek’s response will have a significant impact on the AI industry.

Censorship and Ethical Considerations

AI censorship raises a series of ethical issues, including:

  • Freedom of Speech: Should AI models enjoy freedom of speech?
  • Access to Information: Do users have the right to access comprehensive and objective information?
  • Transparency: Are AI companies obligated to disclose their censorship policies?
  • Responsibility: Who should be responsible for AI censorship?
  • Trust: How to build trust in the age of AI?

These ethical issues need to be explored in depth.

The Specificity of Chinese Censorship

China’s censorship system has its own specificity, mainly manifested in the following aspects:

  • Wide Scope: Censorship covers politics, history, culture, religion, and other fields.
  • Strict Enforcement: Enforcement is stringent, extending even to individuals’ personal speech.
  • Advanced Technology: China maintains a large censorship apparatus and employs advanced technical means.
  • Legal Support: A series of laws and regulations gives the censorship system a legal basis.

These specificities make developing AI models in China a unique challenge.

Comparison of Global AI Censorship Systems

In addition to China, other countries also have different forms of AI censorship, mainly manifested in the following aspects:

  • Europe: The European Union has introduced the Artificial Intelligence Act to regulate AI applications and prevent their use for discrimination or human-rights violations.
  • United States: The United States regulates AI development mainly through market mechanisms and industry self-regulation, though content moderation remains contested.
  • Other Countries: Other countries have also formulated different AI supervision policies based on their own national conditions, some of which may involve content censorship.

By comparing AI censorship systems in different countries, we can better understand the complexity and diversity of censorship.

Future Trends in AI Censorship

Future trends in AI censorship may include the following aspects:

  • Technological Progress: Censorship technology and anti-censorship technology will continue to develop, forming a cat-and-mouse game.
  • Strengthened Supervision: Governments around the world may strengthen supervision of AI, including content censorship.
  • International Cooperation: Countries may strengthen international cooperation in AI governance, including content censorship.
  • Social Attention: All sectors of society will pay more attention to the impact of AI censorship and call for more responsible practices.

Impact of Censorship on DeepSeek

As a Chinese AI company, DeepSeek’s development is profoundly affected by China’s censorship system. DeepSeek needs to strike a balance between complying with Chinese laws and regulations and meeting user needs. DeepSeek’s future development will depend on how it responds to the challenges posed by censorship.

AI and Bias

Censorship is closely related to the problem of AI bias. Censorship may cause the model to only learn partial information, resulting in bias. DeepSeek needs to take measures to ensure that its models can learn comprehensive and objective information and avoid bias.

Transparency and Explainability

In order to cope with the challenges brought by censorship, DeepSeek should improve the transparency and explainability of its models. DeepSeek should disclose its censorship policies and explain how its models handle sensitive topics. This will help build user trust and promote the healthy development of AI.

Conclusion

The DeepSeek R1-0528 case highlights the complexity and importance of AI censorship. Censorship has a significant impact on information access, technological innovation, social divisions, and user trust. Responding to censorship requires multiple strategies such as technical means, legal action, public advocacy, decentralized AI, and open source collaboration. As a Chinese AI company, DeepSeek needs to strike a balance between complying with Chinese laws and regulations and meeting user needs. DeepSeek should improve the transparency and explainability of its models to cope with the challenges brought by censorship.