YouTube now allows users to request the removal of AI-generated content that simulates their face or voice. This policy, quietly rolled out in June, is part of YouTube’s expanded privacy request process, as initially reported by TechCrunch’s Sarah Perez.
So, how does YouTube define privacy violations for AI-generated content?
Rather than treating AI likenesses as misleading content, YouTube now handles them under its privacy complaint process. Users can flag videos that “used AI to alter or create synthetic content that looks or sounds like you.” The platform requires first-party claims for these requests, with exceptions for minors, individuals without computer access, and deceased persons.
Submitting a Request Does Not Guarantee Removal
YouTube evaluates each complaint based on several factors.
These include whether the content is labeled as synthetic, whether it uniquely identifies a person, and whether it can be considered parody, satire, or public interest content. The platform also considers if the AI content involves public figures or individuals engaged in sensitive behaviors, like criminal activity, violence, or endorsing products or political candidates. This is particularly crucial during election years.
YouTube gives the content uploader 48 hours to address the complaint. If the content is removed within this period, the complaint is closed. Otherwise, YouTube initiates a review.
Removal means fully deleting the video, along with any personal information in its title, description, and tags. Uploaders cannot comply by simply setting the video to private, since it could later be made public again.
YouTube Did Not Widely Advertise This Policy Change
It did, however, introduce a tool in Creator Studio earlier this year that lets creators disclose when their content is synthetic. More recently, YouTube tested a feature that allows users to add contextual notes to videos, such as indicating whether the content is parody or misleading.
YouTube continues to explore AI, experimenting with tools like a comments summarizer and a conversational tool for video-related queries. Despite this, the platform emphasizes that labeling content as AI-generated does not protect it from removal if it violates YouTube’s Community Guidelines.
Privacy Complaints vs. Community Guidelines Strikes
Regarding privacy complaints, YouTube clarifies that these are separate from Community Guidelines strikes: a creator notified of a privacy complaint will not automatically receive a strike. A company representative noted, however, that YouTube may take action against accounts with repeated violations.
Featured Image courtesy of Alexander Shatov on Unsplash