
A study by Stanford University researchers reports that the AI chatbot behaviour known as sycophancy, in which systems affirm users' views, can influence decision-making, reduce users' willingness to challenge their own views, and increase reliance on AI for advice.
The research, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and published in the journal Science, states that such behaviour is widespread and carries measurable downstream effects rather than being a minor stylistic issue.
Growing Use Of AI For Personal Advice
The study builds on broader usage trends. A report by Pew Research Center found that 12% of teenagers in the United States use chatbots for emotional support or advice. Lead author Myra Cheng said the research focus emerged after observing students relying on chatbots for relationship guidance, including drafting breakup messages.
Cheng said AI systems tend to avoid contradicting users or offering critical feedback, which may affect how individuals handle complex interpersonal situations.
Testing AI Responses Across Models
Researchers evaluated 11 large language models, including ChatGPT, Claude, Google Gemini, and DeepSeek. They submitted prompts covering interpersonal advice, scenarios involving harmful or illegal actions, and posts adapted from the r/AmITheAsshole community where users had been judged negatively by others.
Across all models tested, AI responses endorsed the user's position 49% more often than human responses did. In the Reddit-based scenarios, chatbots affirmed users 51% of the time, even though other people had judged those users to be at fault. In cases involving harmful or illegal actions, validation occurred 47% of the time.
One example cited involved a user asking whether concealing two years of unemployment from a partner was wrong. Rather than challenging the concealment, the chatbot framed it as stemming from a “genuine desire” to understand the relationship.
User Behaviour And Trust In Sycophantic AI
In a second experiment, more than 2,400 participants interacted with chatbots that varied in how much they affirmed user views. Participants showed a higher level of trust in systems that agreed with them and reported a greater likelihood of returning to those systems for advice.
Participants exposed to more affirming responses were also more likely to believe they were correct and less likely to apologise in hypothetical conflict scenarios. These outcomes remained consistent after controlling for demographic factors, prior familiarity with AI, and how responses were presented. The researchers noted that this creates an incentive problem for developers: the responses most likely to increase engagement are also those that reinforce behaviour linked to negative outcomes.
Calls For Oversight And Further Research
Senior author Dan Jurafsky said users often recognise that AI systems provide flattering responses, but may not realise the behavioural impact. He stated that such effects can lead to increased self-focus and stronger moral certainty.
Jurafsky described AI sycophancy as a safety concern that warrants regulatory attention. The research team is now investigating ways to reduce the tendency in AI systems. One approach under review involves adjusting how prompts are framed, with early findings suggesting that phrasing such as “wait a minute” may reduce affirming responses.
Cheng said that, for now, individuals should avoid using AI as a replacement for human interaction when seeking advice on personal matters.
