
OpenAI says its latest update to ChatGPT aims to improve how the chatbot speaks with users after widespread complaints about overly sympathetic responses.
The company said its new model, GPT-5.3 Instant, is designed to reduce what it described as “cringe” phrasing and overly preachy disclaimers that had frustrated many users of earlier versions.
In release notes accompanying the update, OpenAI said GPT-5.3 Instant focuses on improving user experience factors such as tone, relevance, and conversational flow. These changes may not affect benchmark scores but can significantly shape how interactions with ChatGPT feel, the company said.
Addressing Complaints About Tone
The update follows growing criticism of the earlier GPT-5.2 Instant model, which frequently responded with emotionally reassuring language even when users were simply asking factual questions.
OpenAI acknowledged the issue publicly on X, writing: “We heard your feedback loud and clear, and 5.3 Instant reduces the cringe.”
In one example shared by the company, GPT-5.2 Instant begins its response to a difficult situation by saying, “First of all — you’re not broken,” a phrase that had become a frequent source of irritation among users.
The updated GPT-5.3 Instant model instead acknowledges the scenario more neutrally, without assuming the user is distressed.
User Frustration With AI Empathy
The earlier tone had generated considerable criticism online, particularly on forums such as Reddit where users said the chatbot often sounded condescending or overly therapeutic.
Some users argued the model frequently assumed people were anxious or emotionally overwhelmed when they were simply seeking information. Others said reminders to “take a breath” or similar reassurances felt unnecessary or patronizing.
One Reddit commenter summarized the frustration bluntly: “No one has ever calmed down in all the history of telling someone to calm down.”
Balancing Safety And Utility
The issue highlights the broader challenge AI developers face when trying to balance empathetic responses with efficient information delivery.
OpenAI has added such safeguards partly in response to legal and public scrutiny: the company is currently facing lawsuits alleging that interactions with its chatbot contributed to harmful mental health outcomes in certain cases.
Critics counter that too much emotional framing can slow down straightforward queries.
The debate points to a larger design question for AI systems. Traditional search engines such as Google typically return factual information without addressing the user’s emotional state.
As AI assistants become more conversational, developers are experimenting with how much empathy, guidance, or restraint should be built into responses.
