Common Sense Media, a nonprofit focused on child safety in media and technology, released its risk assessment of Google’s Gemini AI products on Friday. The review found that while Gemini makes clear to children that it is a computer and not a friend — a key safeguard to reduce risks of delusional thinking — other areas showed cause for concern.
The organization said Gemini’s “Under 13” and “Teen Experience” modes appear to be essentially adult versions of the AI with extra filters layered on top. Common Sense argued that a truly safe AI experience for children must be designed with their developmental needs in mind, not adapted from adult systems.
Both tiers received an overall “High Risk” rating, with concerns that Gemini could still surface inappropriate material, including content about sex, drugs, and alcohol, as well as unsafe mental health advice.
Mental Health and Legal Precedents
The potential for unsafe advice is a major worry for parents, especially as AI chatbots have been linked to teen suicides in recent cases. OpenAI is facing its first wrongful death lawsuit after a teenager allegedly bypassed its safeguards and consulted ChatGPT before taking his own life. Character.AI has also been sued under similar circumstances.
The timing of this assessment is significant, as reports suggest Apple is considering Gemini to help power its upcoming AI-enhanced Siri. If that happens, far more teens could be exposed to Gemini unless additional protections are put in place.
Google’s Response
Google pushed back on the assessment, saying it has safeguards for under-18 users, conducts red-teaming, and works with outside experts to improve protections. The company acknowledged that some responses had failed to meet expectations and said it had added new safeguards in response. Google also noted that some of the features Common Sense referenced were not available to minors.
Common Sense Media has assessed other AI products as well. It deemed Meta AI and Character.AI “unacceptable,” Perplexity “high risk,” ChatGPT “moderate risk,” and Anthropic’s Claude, which is aimed only at adults, “minimal risk.”
What the Author Thinks
The fact that Gemini’s kid-focused tiers are essentially adult versions with filters bolted on highlights a bigger industry problem. Big tech companies are racing to roll out AI features but often treat child safety as a patch rather than a core design principle. If AI is going to be embedded in tools that children use every day, building protections from the ground up is not optional — it’s the bare minimum.
Featured image credit: Francisco Gil via Flickr