
YouTube Adds New Label Feature for AI-Generated Videos

By Huey Yee Ong

Mar 21, 2024


YouTube has unveiled a new feature that allows creators to label AI-generated or synthetic content in their videos, marking a significant step towards transparency on the platform.

This announcement follows a previous statement from YouTube indicating that, starting in 2024, creators would need to disclose any AI-generated material. The requirement is part of a broader trend among content creation platforms, where such disclosures typically rely on creators' self-reporting.

How Does The Labeling Feature Work?

As part of the video uploading process, creators will now find a checkbox to indicate whether their content includes “altered or synthetic” elements that could be mistaken for reality.

Examples of AI-Generated Content to Label:

  • Person Impersonation: Making a real person appear to say or do something they didn’t.
  • Altered Footage: Changing footage of real events and places.
  • Fictitious Scenes: Showing a realistic-looking scene that didn’t actually happen.

YouTube aims to ensure viewers are aware when they’re watching content that might depict, for example, a fabricated tornado approaching a town or a celebrity’s voice synthetically narrating a video.

An example of how the AI label appears on the YouTube mobile app video player. Credit: YouTube

What Doesn’t Need to be Labeled?

YouTube clarifies that some enhancements do not require disclosure, such as:

  • Beauty filters
  • Background blur
  • Content that’s clearly fictional or animated

This nuanced approach follows YouTube’s November announcement, which outlined a dual-tier policy focusing on protecting music labels and artists from unauthorized deepfake renditions of their work, while imposing less stringent guidelines on other types of content.

For instance, a music label can request the removal of a video featuring a deepfake version of a song performed by another artist. Meanwhile, individuals featured in deepfake content must navigate a more complex process, submitting a privacy request form for YouTube’s review, a procedure that remains under development.

Can YouTube Effectively Detect AI-Generated Content?

The effectiveness of this self-labeling strategy hinges on creators’ honesty, a reliance shared by other platforms introducing similar features.

Despite the challenges in accurately detecting AI-generated content through software, YouTube has expressed its commitment to identifying such content proactively, particularly when it might mislead viewers.

The platform has also promised more conspicuous labeling for videos covering sensitive subjects like health and politics, underscoring its commitment to preventing misinformation.

Featured Image courtesy of SOPA Images/LightRocket via Getty Images

Huey Yee Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, with a honed applied psychology perspective that makes tech news digestible. In other words, I deliver tech news that is easy to read.
