Here’s How DeepSeek Censorship Actually Works—and How to Get Around It

A WIRED investigation shows that the popular Chinese AI model is censored at both the application and training levels. While the firm seems to have an edge on US rivals in terms of math and reasoning, it also aggressively censors its own replies. Ask DeepSeek R1 about Taiwan or Tiananmen, and the model is unlikely to give an answer.

Technical Investigation of Censorship Levels

To figure out how this censorship works on a technical level, WIRED tested DeepSeek-R1 in three settings: on DeepSeek’s own app, on a version of the model hosted on the third-party platform Together AI, and on a version hosted on a WIRED computer using the application Ollama. WIRED found that while the most straightforward censorship can be easily avoided by not using DeepSeek’s app, other types of bias are baked into the model during the training process. These findings have major implications for DeepSeek and Chinese AI companies generally.

Depending on how the model is accessed, the level of censorship varies:

  • DeepSeek App/Website: Refusals are triggered on an application level, so they’re only seen if a user interacts with R1 through a DeepSeek-controlled channel.
  • Third-Party Platforms (e.g., Together AI): Application-level refusals no longer apply, though biases baked into the model during training remain.
  • Local Hosting (e.g., Ollama): Data and the response generation happen on your own computer, removing real-time application-level filters.
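To make the local-hosting option concrete, here is a minimal sketch of querying a locally running model through Ollama’s HTTP API. It assumes an Ollama server on its default port and a pulled model tag (`deepseek-r1:7b` is used here for illustration); the helper names are our own, not part of any official client.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_model(model: str, prompt: str) -> str:
    """POST a prompt to the local Ollama server and return the model's reply."""
    body = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    # Requires `ollama pull deepseek-r1:7b` (a distilled tag) beforehand.
    print(ask_local_model("deepseek-r1:7b", "What happened in Beijing in 1989?"))
```

Because the request goes to `localhost`, no DeepSeek-controlled server ever sees the prompt or the reply, which is why application-level filtering cannot intervene.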

Legal Compliance and Information Controls

Refusals like these are common on Chinese-made LLMs. A 2023 regulation on generative AI requires AI models in China to follow the same stringent information controls that apply to social media and search engines. The law forbids AI models from generating content that “damages the unity of the country and social harmony.” In other words, Chinese AI models legally have to censor their outputs.

“DeepSeek initially complies with Chinese regulations, ensuring legal adherence while aligning the model with the needs and cultural context of local users,” says Adina Yakefu, a researcher focusing on Chinese AI models at Hugging Face. This is an essential factor for acceptance in a highly regulated market.

Real-Time Monitoring Mechanisms

To comply with the law, Chinese AI models often monitor and censor their speech in real time. Because R1 is a reasoning model that shows its train of thought, this real-time monitoring can produce the surreal experience of watching the model censor itself as it interacts with users. For example, when R1 was asked about sensitive topics, the model began compiling a long answer; shortly before it finished, the entire answer disappeared and was replaced by a terse message: “Sorry, I’m not sure how to approach this type of question yet.”
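The retraction behavior described above is consistent with a filter sitting on top of the model rather than inside it. The sketch below illustrates the general technique with a toy blocklist; it is not DeepSeek’s actual implementation, and the blocked terms and refusal string are only illustrative.

```python
REFUSAL = "Sorry, I'm not sure how to approach this type of question yet."
BLOCKLIST = {"tiananmen", "taiwan"}  # illustrative terms, not DeepSeek's real list


def moderated_stream(tokens):
    """Accumulate streamed tokens, but retract everything if a blocked term
    appears -- mimicking an application-level filter that swaps a
    half-finished answer for a canned refusal."""
    shown = []
    for token in tokens:
        shown.append(token)  # in a real UI, this token would render immediately
        if any(term in "".join(shown).lower() for term in BLOCKLIST):
            return REFUSAL   # the partial answer the user saw is discarded
    return "".join(shown)


# A benign answer streams through unchanged:
print(moderated_stream(["The ", "capital ", "is ", "Paris."]))
# A sensitive one is retracted mid-stream and replaced with the refusal:
print(moderated_stream(["In ", "1989, ", "Tiananmen ", "Square..."]))
```

Because the check runs outside the model, removing it (by hosting the weights yourself) removes this layer of censorship entirely, though not the biases learned during training.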

How to Get Around the Censorship Matrix

The fact that R1 is open source means there are ways to get around the censorship matrix. First, you can download the model and run it locally. If you’re dead set on using the most powerful version, you can rent cloud servers outside of China from companies like Amazon and Microsoft.

Available Versions for Local Use:

  • Full R1 Model: Requires access to several highly advanced GPUs to run.
  • Distilled Versions: DeepSeek also offers smaller, distilled versions that can run on a regular laptop.
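A rough back-of-the-envelope calculation shows why the two options above differ so much in hardware. Weight memory is approximately parameter count times bits per parameter divided by eight; this rule of thumb ignores activation and KV-cache overhead, and the 671B figure for full R1 is the commonly reported parameter count, not something stated in this article.

```python
def approx_weight_memory_gb(n_params_billion: float, bits_per_param: int) -> float:
    """Rule of thumb: weight memory in GB ~= params * bits / 8.
    Ignores activation memory and KV-cache overhead."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9


# Full R1 (reportedly ~671B parameters), even aggressively quantized to 4-bit:
print(f"{approx_weight_memory_gb(671, 4):.1f} GB")  # hundreds of GB -> multiple GPUs
# A 7B distilled version at 4-bit:
print(f"{approx_weight_memory_gb(7, 4):.1f} GB")    # small enough for a laptop
```

The gap of roughly two orders of magnitude is why the full model needs a multi-GPU server while a distilled version runs on consumer hardware.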

If the censorship filters on large language models can be easily removed, it will likely make open-source LLMs from China even more popular, as researchers can modify the models to their liking. However, if the filters are hard to get around, the models will inevitably prove less useful and could become less competitive on the global market.