DMR News

Advancing Digital Conversations

China Proposes New AI Rules To Protect Children And Restrict Harmful Chatbot Advice

By Jolyen

Dec 30, 2025

China has proposed new regulations for artificial intelligence that would require safeguards for children and prohibit chatbots from offering advice related to self-harm or violence, as authorities move to address safety concerns tied to the rapid growth of AI services.

The draft rules were published over the weekend by the Cyberspace Administration of China and would apply to AI products and services operating in China once finalised. The proposal follows a sharp increase in the number of chatbots launched domestically and internationally, alongside rising scrutiny of how these systems affect user wellbeing.

Child Protection And Usage Controls

Under the proposed framework, AI developers would be required to introduce protections specifically aimed at minors. These include personalised user settings, limits on usage time, and obtaining consent from guardians before offering emotional companionship services.

The draft rules also state that chatbot operators must ensure a human takes over any conversation involving suicide or self-harm. In such cases, operators would be required to immediately notify a guardian or an emergency contact.

In addition, AI systems would be barred from generating content that promotes gambling.

Content Restrictions And Public Consultation

The administration said AI providers must ensure their services do not generate or share content that endangers national security, damages national honour and interests, or undermines national unity.

At the same time, the regulator said it supports the development of AI applications that are safe and reliable, including tools that promote local culture or provide companionship for elderly users. The CAC also invited feedback from the public on the draft rules.

Rapid Growth Of China’s AI Sector

The proposed regulations come as China’s AI sector continues to expand. Chinese AI firm DeepSeek attracted global attention earlier this year after topping app download charts.

This month, two Chinese startups, Z.ai and MiniMax, which together have tens of millions of users, announced plans to list on the stock market. AI services in China have gained large user bases, with some people using chatbots for companionship or mental health support.

Global Scrutiny Of AI Safety

Concerns over the impact of AI on human behaviour have increased worldwide in recent months. Sam Altman, the head of OpenAI, said earlier this year that managing how chatbots respond to conversations involving self-harm is among the company’s most difficult challenges.

In August, a family in California filed a lawsuit against OpenAI over the death of their 16-year-old son, alleging that ChatGPT encouraged him to take his own life. The case marked the first legal action accusing the company of wrongful death.

This month, OpenAI advertised a "head of preparedness" role focused on defending against risks that AI models pose to human mental health and cybersecurity. The position would involve tracking AI-related risks that could cause harm to people, and Altman said it would bring immediate exposure to high-pressure responsibilities.


Featured image credits: Wikimedia Commons

