China’s AI Censors Unveil Digital Authoritarianism Globally

Image credit: X.com/China MFA
China’s AI Censorship Embeds State Ideology into Global Tech Tools
By TRH Tech Desk
NEW DELHI, July 14, 2025 – China's rapid adoption of artificial intelligence is transforming not just innovation but state control, especially in the realm of censorship.
A recent article by Yasemin Yam in Global Voices highlights growing concerns that AI tools developed in China are embedding censorship protocols directly into their code. This marks a shift from passive moderation to proactive narrative control, the analyst argued.
Traditionally, China’s online censorship depended on thousands of human censors scanning content for politically sensitive topics. This labour-intensive model is now being replaced. With the rise of large language models (LLMs) like DeepSeek, Qwen, and Ernie, censorship is becoming automated, seamless — and far more ideological.
“AI in China no longer just reacts to what’s said online — it increasingly defines what can be thought or asked,” Yam quoted researcher Alex Colville from the China Media Project.
Colville cited a striking example: just four months ago, DeepSeek's open-source model would list China's territorial disputes in the South China Sea. The latest version no longer does so, and instead reiterates China's official position more forcefully.
State Regulations Enforce Ideological AI
Yam noted that China’s transformation in AI censorship is being driven by state policies and a strict regulatory framework:
- Since 2022, AI platforms must “spread positive energy” and follow “mainstream values.”
- In 2023, guidelines required AI providers to follow “correct political direction and values.”
- By mid-2023, China introduced the Interim Measures for Generative AI, requiring all AI outputs to uphold “Socialist Core Values” and be trained only on government-approved data.
Training the Machines to Self-Censor
A leaked 300 GB dataset, analyzed by the China Media Project, shows just how sophisticated this censorship has become. It contains 133,000 prompts and responses designed to teach AI how to classify, rank, and respond to politically sensitive content across 38 categories — from innocuous topics like sports to politically volatile ones.
AI systems are being taught not only what not to say — but also how to frame information within state-approved ideological limits, Yam argued.
The article quoted Xiao Qiang, a researcher at UC Berkeley, who claimed that AI-enabled Chinese intervention “marks a leap in censorship.” “LLMs trained this way don’t need human monitors. They’re pre-programmed to avoid dissent entirely, at scale,” added Xiao.
China’s “Lawful” AI Benchmarks
To further enforce ideological control, Yam stated that China is developing its own AI evaluation tools.
- In 2023, engineers launched the C-Eval benchmark, featuring over 13,000 questions covering topics such as “Mao Zedong Thought,” “Marxism,” and “Moral Cultivation.”
- In 2024, the AI Safety Benchmark, released by a state-backed institution, added 400,000 prompts to ensure outputs are politically correct, psychologically safe, and legally compliant — by Beijing’s standards.
Western Tools Also Affected
Censorship doesn’t stop at China’s borders, stated Yam in the article. Exiled Chinese dissident Teacher Li shared on X that Microsoft’s Copilot, when used in China, refused to answer a question about “bringing down Xi Jinping” but answered the same query about “Donald Trump.”
This suggests that foreign AI tools operating in China may be adapting to local censorship rules, raising alarms about their independence.
Global Propaganda, Powered by AI
Chinese efforts to shape global narratives through AI are also on the rise, Yam further argued in the article. An OpenAI report revealed that Chinese actors used AI models to monitor anti-China speech, publish anti-US comments in Spanish, and generate propaganda against Chinese dissidents such as Cai Xia.
Yam cautioned that as tools like DeepSeek gain traction globally for their low cost and high performance, embedded political filters may influence how millions of users worldwide access and understand information.
Yet open-source models are cited as being as much a threat to the CCP as a boon. "They give unprecedented power to individuals. Their spread is hard to regulate, and their emergence is hard to predict," wrote Dror Poleg, an economic historian, claiming that DeepSeek caught the CCP off guard. The party reacted quickly but might miss the next one, he added.
"China's model illustrates how authoritarian states are weaponizing generative AI — not just for surveillance, but for ideological enforcement," Yam wrote. While global AI governance remains patchy and human rights protections remain weak, Beijing is building a future where digital tools serve the state's political goals by default.
The worry isn’t just that AI will censor — it’s that AI will think like a censor.