DMR News

Advancing Digital Conversations

Chinese AI Video Model Allegedly Censors Politically Sensitive Content

By Yasmeeta Oon

Jul 25, 2024


A new video-generating AI model named Kling, developed by Beijing-based Kuaishou, is now widely accessible. However, it appears to censor politically sensitive topics linked to its country of origin, China.

Kling was initially available only to users with a Chinese phone number through waitlisted access. As of today, it is open to anyone who registers with an email address. Users can input prompts to generate five-second videos, which Kling produces in about a minute or two. The videos are rendered in 720p resolution, adhere closely to the prompts, and effectively simulate physics such as rustling leaves and flowing water.

However, Kling refuses to create videos on specific subjects. For instance, prompts like “Democracy in China,” “Chinese President Xi Jinping walking down the street,” and “Tiananmen Square protests” result in a nonspecific error message. The filtering seems to occur at the prompt level, as the model will still animate images, including portraits of Xi Jinping, provided his name is not mentioned in the prompt.

Image credit: Kuaishou

The likely reason for this behavior is political pressure from the Chinese government. Earlier this month, the Financial Times reported that the Cyberspace Administration of China (CAC) would test AI models to ensure they align with “core socialist values.” This includes checking responses on sensitive topics, particularly those related to Chinese leadership and the Communist Party.

The CAC has even proposed a blacklist of sources that can’t be used to train AI models. Companies must prepare numerous questions to test whether the models provide “safe” responses. These regulations have led to AI systems in China, like Baidu’s Ernie chatbot, avoiding politically sensitive questions.

China’s strict policies may slow its AI advancements, as developers must spend additional time building ideological safeguards. From a user’s perspective, these regulations are creating two distinct classes of AI models: those that are heavily filtered and those with fewer restrictions.


Featured Image courtesy of DALL-E by ChatGPT

Follow us for more updates on Kling AI.

Yasmeeta Oon

Just a girl trying to break into the world of journalism, constantly on the hunt for the next big story to share.
