
X Launches Pilot Program Allowing AI Chatbots to Create Community Notes

By Hilary Ong

Jul 5, 2025


The social platform X is testing a new feature that allows AI chatbots to generate Community Notes — the crowd-sourced fact-checking comments that provide additional context on posts.

Originally introduced when the platform was still Twitter and expanded under Elon Musk’s ownership of X, Community Notes let users add context to posts that may be misleading or unclear. Other community members then review each note before it is attached publicly to the post. A note might, for example, point out that a video is AI-generated or correct a false claim from a public figure.

Notes only become visible once consensus is reached among raters who normally disagree with one another; requiring agreement across varying perspectives is what keeps the fact-checking balanced.
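
X has open-sourced the ranking algorithm behind this consensus step; at its core is a matrix-factorization model in which a note is published only if raters still rate it helpful after the model accounts for their shared viewpoints. The toy Python sketch below illustrates that idea with made-up ratings and a one-dimensional viewpoint factor; it is a simplified illustration under those assumptions, not X’s production scoring code.

```python
import numpy as np

# Toy ratings: rows are raters, columns are notes; 1 = helpful, 0 = not
# helpful, NaN = not rated. Entirely made-up data for illustration.
R = np.array([
    [1.0, 0.0, 1.0],
    [1.0, 0.0, np.nan],
    [0.0, 1.0, 1.0],
    [np.nan, 1.0, 1.0],
])

n_raters, n_notes = R.shape
rng = np.random.default_rng(0)

mu = 0.0                                # global intercept
b_rater = np.zeros(n_raters)            # how generous each rater is
b_note = np.zeros(n_notes)              # note helpfulness after "bridging"
f_rater = rng.normal(0, 0.1, n_raters)  # 1-D rater viewpoint factor
f_note = rng.normal(0, 0.1, n_notes)    # 1-D note viewpoint factor

lr, reg = 0.05, 0.03
observed = [(u, n) for u in range(n_raters)
            for n in range(n_notes) if not np.isnan(R[u, n])]

# Plain SGD on the model: rating ~ mu + b_rater + b_note + f_rater * f_note
for _ in range(2000):
    for u, n in observed:
        err = R[u, n] - (mu + b_rater[u] + b_note[n] + f_rater[u] * f_note[n])
        mu += lr * err
        b_rater[u] += lr * (err - reg * b_rater[u])
        b_note[n] += lr * (err - reg * b_note[n])
        f_u = f_rater[u]
        f_rater[u] += lr * (err * f_note[n] - reg * f_rater[u])
        f_note[n] += lr * (err * f_u - reg * f_note[n])

# A note "reaches consensus" when its intercept stays high even after the
# factor term has soaked up agreement explained by shared viewpoint.
for n in np.argsort(-b_note):
    print(f"note {n}: intercept={b_note[n]:+.3f}, viewpoint={f_note[n]:+.3f}")
```

In this toy data, notes 0 and 1 split the raters along viewpoint lines, so the factor term absorbs most of their agreement; note 2, rated helpful by raters from both camps, is the one that ends up with a high intercept.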

Industry Influence and AI Integration

Community Notes have inspired similar initiatives by Meta, TikTok, and YouTube. Meta, for instance, replaced its third-party fact-checking programs with this more cost-effective, community-driven approach.

Now, X is exploring whether AI chatbots can contribute to these fact-checking efforts. AI-generated notes can come from X’s own AI assistant, Grok, or other large language models (LLMs) connected through an API. Importantly, AI-submitted notes undergo the same vetting and approval process as human submissions.
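
As a rough illustration of that flow, a bot might draft a note with an LLM and then submit it over HTTP, as in the hypothetical sketch below. The endpoint URL, payload fields, and auth scheme are all placeholder assumptions, since X’s actual interface is not documented in this article; the point is that the note enters the same rating queue as a human submission.

```python
import os
import requests

# Hypothetical submission flow for an AI note writer. Treat every name
# below as an assumption standing in for whatever X actually exposes.
API_URL = "https://api.x.com/2/community_notes"  # placeholder endpoint
API_KEY = os.environ["X_API_KEY"]                # placeholder credential

def submit_ai_note(post_id: str, note_text: str) -> dict:
    """Send an LLM-drafted note into the same rating queue as human notes."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "post_id": post_id,    # the post the note adds context to
            "text": note_text,     # the LLM-generated draft
            "author_type": "llm",  # disclose that an AI wrote the note
        },
        timeout=10,
    )
    response.raise_for_status()
    # The note is not published here: it still needs consensus from raters.
    return response.json()
```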

Challenges and Considerations with AI Fact-Checking

Relying on AI for fact-checking raises concerns because AI systems frequently “hallucinate,” or generate inaccurate or fabricated information. A recent paper by researchers working on X Community Notes emphasizes the need for humans and AI to collaborate. Human reviewers provide essential oversight to improve AI-generated notes through reinforcement learning and act as the final checkpoint before publication.

The researchers highlighted, “The goal is not to create an AI assistant that tells users what to think, but to build an ecosystem that empowers humans to think more critically and understand the world better. LLMs and humans can work together in a virtuous loop.”
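
As a very rough cartoon of that loop, the sketch below reinforces a stub note writer with simulated rater feedback. Everything in it, from the canned drafts to the scoring rule, is an invented stand-in for illustration; the paper’s actual training setup is not described in this article.

```python
import random

# A stub note writer drafts notes, simulated community ratings act as the
# reward signal, and the writer learns to prefer drafts raters find helpful.

class ToyNoteWriter:
    """Stub 'LLM' that chooses among canned drafts, reinforced by feedback."""

    def __init__(self, drafts):
        self.scores = {d: 0.0 for d in drafts}

    def draft(self):
        # Mostly exploit the best-scoring draft, occasionally explore.
        if random.random() < 0.2:
            return random.choice(list(self.scores))
        return max(self.scores, key=self.scores.get)

    def reinforce(self, note, reward):
        self.scores[note] += reward

def community_rating(note):
    """Stub for human raters: reward sourced context, punish vague claims."""
    return 1.0 if "Source:" in note else -1.0

writer = ToyNoteWriter([
    "The clip is AI-generated. Source: the uploader's original metadata.",
    "This looks fake to me.",
])

for _ in range(50):
    note = writer.draft()
    writer.reinforce(note, community_rating(note))

best = max(writer.scores, key=writer.scores.get)
print("Sent for human rating:", best)  # humans remain the final checkpoint
```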

However, risks remain. Because contributors can connect third-party LLMs, behavior will vary from model to model, and that variance could affect note accuracy. Some models may prioritize sounding “helpful” over being factually correct, which could let misinformation slip into notes. A flood of AI-generated submissions could also overwhelm the human raters who review them, hurting the quality and timeliness of reviews.

X plans to pilot AI-generated Community Notes for several weeks to evaluate their effectiveness before considering a broader rollout. Users should not expect to see these AI notes immediately, as the company aims to ensure quality and reliability first.

Author’s Opinion

The integration of AI into community fact-checking is a promising step toward managing misinformation at scale, but it cannot replace human critical thinking and oversight. Overreliance on AI risks spreading inaccuracies, especially given AI’s tendency to hallucinate. The best path forward is a balanced approach where AI assists but humans remain the ultimate gatekeepers to ensure integrity and trust.


Featured image credit: juicy_fish via Freepik


Hilary Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
