
Wikipedia has introduced a policy prohibiting the use of AI-generated text in its articles, while still permitting limited use of AI tools in specific editorial tasks under human oversight.
The updated rule states that large language models cannot be used to generate or rewrite article content. This replaces earlier guidance that discouraged creating full articles with AI but did not explicitly ban broader use.
Policy Update And Community Decision
The change follows ongoing debate within Wikipedia’s volunteer editor community over the role of AI in content creation. According to 404 Media, the policy was approved by a wide margin, with 40 votes in favour and two against.
The revised wording clarifies that AI tools should not be used to produce substantive article content, addressing concerns about accuracy and source reliability.
Permitted Uses Of AI In Editing
Despite the restriction, the policy allows editors to use AI for limited support tasks. Editors may use large language models to suggest basic copy edits to their own writing, provided a human editor reviews and verifies every suggestion before it is applied.
The policy notes that AI systems can introduce unintended changes or alter meaning beyond the user’s request. As a result, any AI-assisted edits must not add new content and must remain consistent with cited sources.
Ongoing Debate Over AI In Media
The update reflects broader discussions across media and publishing platforms about how to manage AI-generated content. Within Wikipedia, the issue has focused on maintaining editorial standards in a system that relies on volunteer contributors and verifiable sourcing.
The new rule establishes clearer boundaries for AI use while retaining a role for tools that assist, rather than generate, written content.
Featured image credits: Wikimedia Commons
