DMR News

Advancing Digital Conversations

Meta to Apply AI Labels More Broadly Across Platforms

By Huey Yee Ong

Apr 8, 2024

Meta has announced plans to expand its labeling of AI-generated content across its platforms, acknowledging that its previous policy was too narrow. This decision comes in response to the Oversight Board’s recommendation following a review of a controversial video depicting President Joe Biden.

The video, which was edited to falsely suggest inappropriate behavior involving his granddaughter, was allowed to remain on Facebook. The decision highlighted the need for a policy reevaluation, particularly given the potential impact of manipulated media on the 2024 elections.

“Made with AI” Badge

Starting in May, Meta will introduce a “Made with AI” badge for a broader range of content, including:

  • Videos, images, and audio that exhibit industry-standard AI indicators.
  • Content acknowledged by users as AI-generated.
  • Posts flagged by fact-checkers, especially if the content is identified as false or altered, which may lead to potential downranking.

From Removal to Transparency

Rather than outright removing manipulated media, Meta aims to provide transparency and additional context, a shift from its prior policy of taking down content that violated its manipulated video guidelines. This change, set to take effect in July, is designed to help users better understand the nature of the content they are viewing without unnecessarily infringing on freedom of speech, as stated in a company blog post.

The company had previously applied an “Imagined with AI” label to photorealistic images created using its own AI tools. The updated policy, however, goes further, indicating that more prominent labels may be used for content that poses a significant risk of deceiving the public on important matters, regardless of whether it was created by AI or other means.

Meta remains committed to removing content that violates its policies, including:

  • Content that violates policies against voter interference.
  • Bullying and harassment.
  • Violence and incitement.
  • Any other content that breaches Meta’s Community Standards.

The Oversight Board’s Approval

In a statement to Engadget, the Oversight Board expressed satisfaction with Meta’s adoption of its recommendations, highlighting the importance of balancing freedom of expression with the prevention of offline harm, especially in a crucial election year.

Moreover, Meta’s initiative to label digitally created and altered content as “made with AI” on Facebook and Instagram marks a significant update before the U.S. presidential election. The company will also introduce a separate “high-risk” label for content that could significantly deceive the public on matters of importance, applying these labels to both AI-generated and manually altered content.

Meta’s strategy reflects a shift from removal to providing more context about how content was created, aiming to keep users informed about the origins of the media they consume. This approach is part of Meta’s broader effort to manage the challenges posed by AI-generated content, especially in the political arena where such technologies are rapidly evolving.

By labeling a wider range of content and implementing more distinct labels for high-risk material, Meta is enhancing the transparency and accountability of digital media. This initiative is critical in the context of an election year, where the accuracy and integrity of online content are paramount.

Featured Image courtesy of Francis Mascarenhas/REUTERS

Huey Yee Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed through an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
