Meta announced on Monday that it is intensifying efforts to combat accounts sharing “unoriginal” content on Facebook—specifically those that repeatedly reuse text, photos, or videos created by others. So far this year, the company has removed around 10 million profiles impersonating major content creators.
In addition, Meta has taken action against 500,000 accounts engaged in spammy behavior or fake engagement, such as artificially inflating content visibility. Penalties include demoting those accounts' comments, reducing the distribution of their posts, and blocking them from monetization.
Following YouTube’s Lead on Content Policies
This announcement follows YouTube’s recent clarification of its policies targeting unoriginal content, including mass-produced and repetitive videos—issues amplified by the rise of AI-generated media.
Meta emphasizes that users who engage with others’ content—through reaction videos, trends, or adding personal commentary—will not be penalized. Instead, the crackdown targets spam accounts or impersonators who repost others’ content without significant original input.
Accounts caught abusing the system will temporarily lose access to Facebook monetization programs and see reduced content distribution. When duplicate videos are detected, Meta will prioritize the original creator's content.
Meta is also testing a feature that adds links on duplicated videos directing viewers to the original source.
Addressing User Concerns on Enforcement
The crackdown arrives amid user criticism of automated enforcement errors across Meta’s platforms, including Instagram. Nearly 30,000 people have signed a petition demanding fixes for wrongful account suspensions and better access to human support; many small businesses say the erroneous bans have hit them especially hard.
Meta has yet to publicly address these concerns despite attention from creators and media.
While the update primarily targets reused content, it hints at growing attention to “AI slop”—low-quality, AI-generated media that often consists of basic visuals combined with AI narration.
Meta advises creators to avoid simply stitching clips together or applying watermarks without genuine storytelling. The company also encourages high-quality captions and discourages overreliance on unedited automated subtitles.
Transition Period for Creators
The new rules will roll out gradually over the coming months, allowing creators time to adapt. Facebook creators can now access detailed post-level insights to understand any distribution issues. Additionally, they can monitor potential penalties via the Support home screen on their professional profiles.
In its most recent Transparency Report, Meta disclosed that fake accounts made up roughly 3% of Facebook’s global monthly active users, and that it took action against 1 billion such accounts between January and March 2025.
Meta has also shifted from direct fact-checking to a community-driven approach called Community Notes in the U.S., similar to the system used by X, which lets users help assess content accuracy and adherence to standards.
Author’s Opinion
Meta’s latest crackdown reflects the complex balance platforms must maintain—fighting content theft and spam while fostering genuine creativity and expression. The rise of AI-generated content complicates moderation, blurring lines between original and derivative work. Automated enforcement tools can catch bad actors but risk alienating legitimate creators if overused. Transparent, nuanced policies combined with meaningful human oversight are crucial to protect both creators and users in this evolving digital landscape.
Featured image credit: Wikimedia Commons