In recent weeks, TikTok has seen a troubling surge in racist and disrespectful videos. Many of these posts have been identified as AI-generated, reportedly created with Google's Veo 3 video generation model, which launched in May.
The videos blatantly demean people of color and mock races and religions around the world. Media Matters for America (MMFA) found that these AI-generated clips promote hatred and ridicule based on skin color, ethnicity, and harmful cultural stereotypes.
Targeted Groups and Themes
Most of the videos trade in racist tropes against Black individuals, including offensive animalistic comparisons and stereotypes involving food. Other clips contain antisemitic content and misleading portrayals of immigrants and protesters; some depict violent scenarios and reference concentration camps.
MMFA's investigation showed that many of the videos carried the Veo 3 watermark, indicating the tool was used to generate the hateful content. This is especially concerning given Google's policies restricting hateful and explicit material on Veo 3.
Despite TikTok's prohibition of hate speech and harmful content, many of these racist AI videos have reached millions of viewers and drawn high engagement, exposing weaknesses in the platform's content moderation.
The Growing Threat of AI Misuse Online
AI technology, while innovative, is increasingly exploited by bad actors to spread harmful propaganda and deepfakes. Incidents such as the Maryland principal who was falsely depicted making racist remarks in an AI-generated recording demonstrate the real-world damage AI misuse can cause.
Lawmakers and regulators are pushing companies to strengthen content filters and safeguards. Legislative efforts aim to protect the public from the use of AI to spread hate and misinformation.
Author’s Opinion
Artificial intelligence offers tremendous opportunities but also significant risks when misused. The rise of racist AI content on platforms like TikTok highlights the urgent need for responsible AI governance and effective moderation. Technology firms must work closely with regulators to prevent AI tools from amplifying hate and division. Protecting vulnerable communities and preserving social harmony require swift and decisive action.
Featured image credit: FMT