
X appears to be introducing a feature that labels edited images as “manipulated media,” but the company has not explained how the designation will be applied or whether it covers edits made with traditional tools such as Photoshop.
Announcement Via Musk And Proxy Account
The only indication of the change came from Elon Musk, who reposted a message on X reading “Edited visuals warning” alongside an announcement from the pseudonymous account DogeDesigner. The account has previously been used to preview new X features that Musk later amplifies.
DogeDesigner said the feature could make it harder for legacy media groups to spread misleading clips or images and claimed the labeling capability is new to X. The company has not published documentation describing how the feature works.
Historical Precedent At Twitter
Before the service was acquired and rebranded as X, Twitter labeled posts containing manipulated, deceptively altered, or fabricated media rather than removing them outright.
In 2020, Yoel Roth, then the company’s head of site integrity, said the policy applied beyond artificial intelligence and included edits such as cropping, slowing video, overdubbing audio, or altering subtitles. It is unclear whether X is reviving those standards or introducing new criteria.
Unclear Enforcement And Scope
X’s current help documentation references a policy against sharing inauthentic media, but enforcement has been inconsistent. Recent incidents involving the circulation of non-consensual deepfake images highlighted gaps in moderation, while manipulated images have also been shared by official accounts, including the White House’s.
The company has not clarified whether the “manipulated media” label applies specifically to AI-generated images, AI-assisted edits, or any visual content altered after capture.
Lessons From Meta’s Labeling Effort
The ambiguity mirrors challenges faced by Meta when it introduced AI image labeling in 2024. Meta’s systems incorrectly tagged real photographs with a “Made with AI” label after detecting metadata changes caused by common editing tools.
In some cases, Adobe cropping features flattened images before export, and the resulting metadata changes triggered Meta’s detection. In others, using Adobe’s Generative Fill tool to remove small elements caused images to be labeled as AI-created. Meta later revised the label to “AI info” to reduce misclassification.
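To see why metadata-based labeling is brittle, consider the following simplified, hypothetical detector. It is not Meta’s or X’s actual code; it merely scans a file’s raw bytes for IPTC digital-source-type terms that some editing tools embed in XMP metadata, and the detection approach and function names are illustrative assumptions only:

```python
# Hypothetical sketch -- not any platform's real pipeline -- of a naive
# metadata-based AI detector. It searches a file's raw bytes for IPTC
# NewsCodes digital-source-type URIs that some editors write into XMP.
import sys

AI_MARKERS = [
    b"digitalsourcetype/trainedAlgorithmicMedia",               # fully AI-generated
    b"digitalsourcetype/compositeWithTrainedAlgorithmicMedia",  # AI-assisted composite
]

def naive_ai_label(path: str) -> str:
    """Assign a label based purely on embedded metadata markers."""
    with open(path, "rb") as f:
        data = f.read()
    if any(marker in data for marker in AI_MARKERS):
        # A small Generative Fill retouch can set such a marker, so a
        # lightly edited photo gets the same label as a fully synthetic one.
        return "Made with AI"
    # Flattening or re-exporting can strip metadata entirely, so a genuine
    # AI image can also slip through unlabeled.
    return "no AI marker found"

if __name__ == "__main__":
    print(naive_ai_label(sys.argv[1]))
```

A detector like this fails in both directions: a minor retouch earns the same label as a wholly synthetic image, while stripping metadata removes the signal altogether, which is broadly the pattern Meta encountered.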
Broader Industry Approaches
Other platforms have adopted labeling systems with varying levels of transparency. TikTok labels AI-generated content, while music streaming services have introduced markers for AI-generated tracks. Google Photos uses provenance standards to show how images were created.
The Coalition for Content Provenance and Authenticity (C2PA) is a standards body that works on verifying content authenticity through tamper-evident metadata. Its steering committee includes Microsoft, the BBC, Adobe, Arm, Intel, Sony, and OpenAI.
X is not currently listed as a C2PA member. The company has not said whether its new labeling feature aligns with any established provenance standards or relies on internal detection methods.
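The core idea behind tamper-evident provenance is that a signed manifest binds claims to the exact image bytes, so any later edit invalidates the record unless it is re-signed. Real C2PA manifests use JUMBF containers and X.509 certificate chains; the toy sketch below substitutes an HMAC for a real signature, and every name in it is illustrative rather than part of any actual standard:

```python
# Toy illustration of the tamper-evident principle behind C2PA-style
# provenance. HMAC stands in for a real cryptographic signature; the
# key and claims shown here are hypothetical.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key-not-a-real-credential"

def sign_manifest(image_bytes: bytes, claims: dict) -> dict:
    """Bind provenance claims to a hash of the exact image bytes."""
    manifest = dict(claims, content_hash=hashlib.sha256(image_bytes).hexdigest())
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return manifest

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Any change to the pixels or the claims breaks verification."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    if claims["content_hash"] != hashlib.sha256(image_bytes).hexdigest():
        return False  # image was edited after signing
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])

if __name__ == "__main__":
    image = b"...original pixel data..."  # placeholder bytes
    m = sign_manifest(image, {"tool": "camera-firmware", "edits": []})
    print(verify_manifest(image, m))            # True: untouched
    print(verify_manifest(image + b"x", m))     # False: tamper detected
```

Unlike the metadata scan above, this approach cannot be fooled by an editor quietly rewriting tags, but it only works if the image carries a manifest in the first place, which is why platform membership in C2PA matters.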
Outstanding Questions
Key details remain unresolved, including whether users can dispute a “manipulated media” label beyond X’s Community Notes system, and whether the feature is newly deployed or a rebranded version of existing policies.
Given X’s role as a distribution channel for political content and propaganda, the absence of published standards leaves uncertainty over how images will be assessed and labeled going forward.
