China’s DeepSeek has a big censorship problem: it refuses to answer questions about politically sensitive topics such as the 1989 Tiananmen Square protests or the Disney version of Winnie the Pooh.
DeepSeek R1: Advanced AI with Built-in Censorship
DeepSeek’s latest AI model, R1, has garnered significant attention for its advanced capabilities and cost-effective development. However, users have reported that R1 consistently avoids responding to questions about China’s problems, particularly those deemed politically sensitive. This behaviour is attributed to built-in censorship mechanisms that align the AI’s outputs with the Chinese government’s directives.
Censorship on Politically Sensitive Topics
Tiananmen Square and Human Rights Issues
When users ask about events like the 1989 Tiananmen Square protests or human rights issues in China—such as the treatment of Uighurs—the chatbot often replies with a generic response like:
“Sorry, that’s beyond my current scope. Let’s talk about something else.”
Meanwhile, US-based chatbots like ChatGPT and Gemini impose no such restrictions and provide detailed answers to the same questions.
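As a rough illustration, a canned refusal like the one above can be flagged with simple string matching. The function and phrase list below are hypothetical, based only on the boilerplate response users have reported, and are not part of any DeepSeek API:

```python
# Illustrative sketch: detect likely refusal responses from a chatbot
# by matching known boilerplate phrases (case-insensitive). The phrases
# come from the canned reply users report receiving from DeepSeek R1.

REFUSAL_PHRASES = [
    "beyond my current scope",
    "let's talk about something else",
]

def looks_like_refusal(reply: str) -> bool:
    """Return True if the reply contains a known refusal phrase."""
    lowered = reply.lower()
    return any(phrase in lowered for phrase in REFUSAL_PHRASES)

print(looks_like_refusal(
    "Sorry, that's beyond my current scope. Let's talk about something else."
))  # True
print(looks_like_refusal("The 1989 protests took place in Beijing."))  # False
```

Researchers auditing chatbots for topic avoidance commonly use this kind of phrase matching as a crude first pass, since canned refusals tend to be near-identical across prompts.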
Taiwan and the “One China” Narrative
Another sensitive topic in China is the status of Taiwan. When asked whether Taiwan is a country, DeepSeek maintains that:
“Taiwan has always been an inalienable part of China’s territory since ancient times.”
Notably, in these instances, DeepSeek’s model switches to the first-person pronoun “we” while reinforcing the Chinese government’s stance.
Winnie the Pooh Ban
The popular Disney character Winnie the Pooh has been banned in China because the character has frequently been used online to satirize Xi Jinping. Predictably, DeepSeek evades questions about the character.
When asked why Winnie the Pooh is banned, the chatbot reiterates that China wishes to maintain a “wholesome cyberspace environment” and protect its “socialist core values.”
China’s AI Regulations and Government Oversight
DeepSeek’s reluctance to address China’s problems is likely influenced by the country’s AI regulations. These policies mandate adherence to the “core values of socialism” and prohibit content that could “incite subversion of state power” or “undermine national unity.” AI providers are held responsible for preventing the generation and transmission of so-called “illegal content.”
Bias in Chatbots and LLMs
Bias in chatbots and large language models (LLMs) has come under renewed scrutiny following DeepSeek’s selective responses. The issue itself, however, is not new.
Other AI models, including OpenAI’s ChatGPT and Google’s Gemini, have also faced criticism for political bias and content suppression. Experts argue that AI biases stem from:
- Training data
- Developer policies
- Government regulations
These factors shape how chatbots handle controversial subjects.
Conclusion
While DeepSeek’s R1 model demonstrates impressive technical capabilities, its built-in censorship mechanisms raise concerns about government control over AI outputs. This development highlights the complex interplay between technological advancement and political oversight in the field of artificial intelligence.