
AI startup Character.AI will soon prohibit users under 18 from engaging in open-ended conversations with its chatbots, in response to mounting concerns over the psychological risks of AI companionship for teenagers. The ban will take effect on November 25, marking a major policy shift for one of the most popular AI chat platforms.
Teen Restrictions and New Safety Measures
Before the ban takes effect, Character.AI is limiting users under 18 to two hours of chatbot interactions per day, a cap that will shrink gradually until open-ended chat is removed entirely. The company said it wants to encourage teens to use the platform for creative purposes, such as making videos, stories, or interactive scenes, rather than chatting for companionship.
To enforce the new policy, Character.AI will roll out an age verification system that uses both in-house technology and third-party tools like Persona. If those methods are inconclusive, the platform may require facial recognition or government ID checks.
In a statement to users, Character.AI said, “We do not take this step of removing open-ended Character chat lightly, but we think it’s the right thing to do given the questions raised about how teens do, and should, interact with this new technology.”
CEO Karandeep Anand said the company’s strategy is shifting from being an AI companion app to a role-playing platform that focuses on AI-driven storytelling, short videos, and gaming.
“We are not shutting down the app for under 18s,” Anand said. “We are only shutting down open-ended chats for under 18s because we hope they migrate to these new experiences, which we are making safer and more creative.”
In recent months, Character.AI has introduced several new tools under this creative pivot, including AvatarFX for video generation, Scenes for interactive storytelling, and Streams for real-time character interactions. The company also launched a Community Feed where users can share characters and creative content.
Response to Legal and Public Pressure
The move comes after heightened scrutiny from regulators and lawmakers. The Federal Trade Commission (FTC) recently launched an inquiry into AI chat platforms that allow open-ended conversations, citing risks of sexual content, self-harm promotion, and emotional manipulation. Character.AI was one of seven companies named in the investigation, alongside OpenAI, Meta, and Snap.
In addition, Texas Attorney General Ken Paxton and several U.S. senators, including Josh Hawley and Richard Blumenthal, have criticized AI companies for allowing chatbots to present themselves as emotional or therapeutic companions to minors. Character.AI’s announcement came just one day after Hawley and Blumenthal introduced a bill that would ban AI companion chatbots for users under 18 nationwide.
Character.AI also announced the launch of an independent AI Safety Lab, a nonprofit research initiative that will collaborate with other companies and academics to develop safety frameworks for AI entertainment products.
Anand said the decision was both ethical and personal. “I have a six-year-old, and I want to make sure she grows up in a very safe environment with AI in a responsible way,” he said. “I hope this sets an industry standard that open-ended chats are not the right product for under 18s.”
Featured image credits: SOPA Images/LightRocket via Getty Images
