DMR News

Advancing Digital Conversations

New TikTok Update Labels AI-Generated Content Automatically

By Huey Yee Ong

May 12, 2024

On Thursday, May 9, TikTok rolled out a significant update to its platform, beginning to automatically label videos and images created with artificial intelligence. The new feature is part of TikTok’s broader strategy to address rising concerns over AI-generated misinformation, particularly with the 2024 election on the horizon.

The company is implementing “Content Credentials” from the Coalition for Content Provenance and Authenticity (C2PA), a system that attaches metadata to digital content, allowing AI-generated material to be identified and labeled.

Expanding Transparency Across the Platform

The rollout of this feature initially covers images and videos, with plans to extend it to audio-only content in the near future. Content Credentials aim to provide a standardized method across the tech industry for ensuring content transparency and authenticity.

TikTok has also joined the Adobe-led Content Authenticity Initiative, a collaboration that focuses on establishing industry-wide standards for the digital production of images, videos, and audio clips, making their creation transparent and traceable.

How Are Tech Companies Combating AI Misinformation?

This measure builds on existing practices within TikTok, where the platform already mandates that content created using its in-app AI effects be labeled. The new policy expands this requirement, applying automatic labeling to AI-generated content uploaded from other platforms as well.

The intention is to make users aware of the origins and nature of the content they consume, particularly as AI-generated content becomes more realistic and potentially misleading.

The concern over AI’s role in spreading misinformation is shared by many in the industry and government. In February, TikTok joined forces with other leading tech companies—including Microsoft, Meta, Google, Amazon, and OpenAI—in a commitment to combat AI-driven misinformation during the 2024 election cycle.

Moreover, TikTok’s engagement in these initiatives comes amid legislative pressure in the United States. After President Joe Biden signed a law in April giving TikTok’s parent company, ByteDance, nine months to divest the app or face a ban, TikTok sued the U.S. government. The lawsuit argues that the ban infringes on First Amendment rights, underscoring the tense backdrop against which these technological and policy updates are unfolding.

Educational Efforts to Combat Misinformation

In addition to technical solutions, TikTok is actively working on educational campaigns to improve media literacy. The platform is developing resources in partnership with:

  • The Poynter Institute’s MediaWise project: Aimed at educating users on distinguishing credible information from misinformation.
  • WITNESS: A human rights organization that teaches civilians how to use technology to record and protect themselves.

These initiatives are designed to help users discern and understand AI-generated content and deepfakes, which are becoming increasingly prevalent and convincing.

Other social media giants are also adopting similar measures. Meta and Snapchat, for example, have introduced new labeling systems for AI-generated content, with Snapchat implementing a visible watermark to signal AI involvement.

Featured Image courtesy of TOLGA AKMEN/AFP via Getty Images

Huey Yee Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, with a honed applied psychology perspective that makes tech news digestible. In other words, I deliver tech news that is easy to read.
