DMR News


Instagram Chief Adam Mosseri Disputes MrBeast’s AI Concerns, Says Society Must Adapt

By Hilary Ong

Oct 14, 2025


Instagram head Adam Mosseri stated this week that artificial intelligence will fundamentally change who can be a creator, enabling people who previously could not produce content at a certain quality and scale to do so. However, the Meta executive acknowledged that bad actors will inevitably use the technology for “nefarious purposes,” stressing that children growing up today will have to be taught that simply seeing a video of something does not mean it actually happened.

AI’s Impact on Content Production

Mosseri shared his thoughts on the creator industry’s transformation at the Bloomberg Screentime conference. He was asked to address the recent concerns raised by mega-creator MrBeast (Jimmy Donaldson), who warned that AI-generated videos could soon threaten creators’ livelihoods. Mosseri countered this anxiety, noting that most creators won’t use AI to replicate the huge sets and elaborate productions characteristic of MrBeast’s work. Instead, AI will serve as a tool that allows a much broader range of creators to be more productive and generate better content.

Mosseri drew a historical parallel: “What the internet did… was allow almost anyone to become a publisher by reducing the cost of distributing content to essentially zero.” He continued, “And what some of these generative AI models look like they’re going to do is they’re going to reduce the cost of producing content to basically zero.”

Mosseri also suggested that much of the content on platforms today is already “hybrid,” where creators use AI for minor elements like color corrections or filters, meaning the line between real and AI-generated will become even more blurred. He predicted that the future will feature more of this “middle” ground than purely synthetic content for a while.

The Challenge of Labeling and Trust

Mosseri admitted that Meta has a responsibility to improve its methods for identifying AI-generated content. However, he suggested that the company’s initial approach to automatically labeling AI content had been “a fool’s errand,” referencing a situation where Meta’s systems incorrectly flagged real content as synthetic because AI tools (including those from Adobe) had been used as part of the production workflow. The executive confirmed that the labeling system needs more work, but argued that Meta also needs to provide users with more context to help them make informed decisions. He may have been hinting at an expansion of Meta’s Community Notes feature, a crowdsourced fact-checking system modeled after the one used by X, in which users with differing viewpoints add corrections or context to posts.

Mosseri argued that the final responsibility for digital trust must ultimately fall to society itself, rather than solely on the platforms. He spoke about his own young children, saying, “My kids are young… I need them to understand, as they grow up and they get exposed to the internet, that just because they’re seeing a video of something doesn’t mean it actually happened.” He contrasted this with his own experience: “When I grew up, and I saw a video, I could assume that that was a capture of a moment that happened in the real world.” Mosseri concluded that young users “are going to… need to think about who is saying it, who’s sharing it, in this case, and what are their incentives, and why might they be saying it.”

In the broader discussion, Mosseri also touched on Instagram’s future plans, including the development of a dedicated TV app and the focus on Reels and DMs as core features, which he said simply reflect current user trends. Regarding the changing ownership of TikTok’s U.S. operations, Mosseri said that competition is positive because TikTok has forced Instagram to “do better work.” He added that the ownership deal itself is hard to parse, but the app’s fundamental features (the ranking system, the creators, and the general experience) will likely not change meaningfully.

Author’s Opinion

Mosseri correctly identifies that the most critical defense against the proliferation of convincing AI-generated content is not technology, which is easily defeated, but fundamental digital literacy and human skepticism. By placing the onus on parents and society to teach children to actively question the source and incentives behind every piece of online content, he is advocating for a profound, and necessary, cultural shift. The platform’s attempt to use crowdsourced fact-checking for AI labeling is a practical recognition that human consensus, however messy, is the only sustainable guardrail against a technology designed to make convincing imitations of reality seamless and cheap to produce.


Featured image credit: Wikimedia Commons


Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
