
X says it will penalize creators who post AI-generated videos depicting armed conflict without clearly stating that the material was produced using artificial intelligence.
The new rule was announced Tuesday by X's head of product, Nikita Bier, who said creators violating the policy will be suspended from the platform's Creator Revenue Sharing Program for 90 days.
If the same users continue posting misleading AI-generated conflict videos after the suspension ends, they will be permanently removed from the program.
Disclosure Required For AI War Content
Bier said the move is intended to limit misinformation during wartime, when manipulated videos could spread quickly and influence public understanding of events.
“During times of war, it is critical that people have access to authentic information on the ground,” Bier wrote in a post on X. “With today’s AI technologies, it is trivial to create content that can mislead people.”
Under the new rule, users must clearly disclose when a video of armed conflict is generated using AI tools.
If that disclosure is missing, the creator will lose access to monetization through the revenue sharing program for three months.
How X Will Identify Misleading Content
X said enforcement will rely on a combination of automated systems and its crowdsourced fact-checking feature, Community Notes.
The platform plans to use detection tools designed to identify generative AI media, alongside reports and context added by Community Notes contributors.
The policy focuses specifically on monetized content tied to conflict footage rather than banning the videos outright.
Creator Monetization Under Scrutiny
X’s Creator Revenue Sharing Program allows users to earn income from advertising revenue generated by posts that attract high engagement.
The initiative was introduced to encourage more content creation on the platform, but critics argue it also incentivizes sensational or emotionally charged posts, rewarding creators who publish misleading or provocative material designed to trigger outrage and viral reactions.
Limits Of The New Policy
The new restriction applies specifically to AI-generated content related to armed conflict.
However, other forms of AI-generated media, such as political misinformation, manipulated influencer promotions, and misleading product endorsements, are not covered by the policy.
As generative AI tools make it increasingly easy to produce convincing fake images and videos, analysts say platforms continue to struggle with how to limit misuse while still allowing creators to use the technology for legitimate purposes.
