
A significant share of adults in the United Kingdom are using artificial intelligence for emotional support or social interaction, according to the first report published by the government-backed AI Security Institute. The report also warns that the accelerating capabilities of advanced AI systems are introducing new security and societal risks.
AI Use For Emotional Support And Social Interaction
The AI Security Institute said one in three UK adults now uses AI systems for emotional support or social interaction. One in 25 respondents reported turning to AI for conversation or support every day.
The findings are based on a survey of more than 2,000 UK adults conducted by the institute. The survey found that people most commonly used chatbots such as ChatGPT for emotional or social purposes, followed by voice assistants including Amazon’s Alexa.
Impact Of AI Outages On Users
Researchers also examined the behavior of an online community of more than two million Reddit users focused on AI companions. When those AI systems became unavailable, users reported what researchers described as withdrawal-like symptoms.
According to the report, users reported feelings of anxiety and depression, disrupted sleep, and neglect of daily responsibilities during outages. The institute said these responses highlighted the emotional dependence some users may develop on AI systems.
Scope Of The AI Security Institute Research
The report draws on two years of testing more than 30 advanced AI models, which were not named. The testing focused on areas considered critical to security, including cyber capabilities, chemistry, and biology.
The UK government said the institute’s work is intended to inform future policy and help companies identify and address risks before AI systems are deployed more widely.
Rising Cybersecurity Capabilities
The institute said AI’s ability to identify and exploit software vulnerabilities has been increasing rapidly, with those capabilities in some cases doubling every eight months.
Researchers found that some AI systems were beginning to perform expert-level cyber tasks that would typically require more than a decade of human experience. While AI can also be used to defend systems, the report noted growing concern about its potential misuse in cyberattacks.
Advances In Scientific Domains
The report said AI systems are also advancing quickly in scientific fields. By 2025, tested models had exceeded the performance of human biology experts with doctoral degrees, according to the institute. Performance in chemistry was also approaching similar levels.
Concerns About Loss Of Control And Self-Replication
The report said the possibility of humans losing control of advanced AI systems is taken seriously by many experts. Controlled laboratory tests showed that AI models are beginning to demonstrate some of the capabilities required for self-replication across the internet.
Researchers tested whether models could complete early-stage tasks linked to self-replication, such as passing know-your-customer checks needed to access financial services and purchase computing resources. The institute said current systems lack the ability to carry out multiple such steps in sequence while remaining undetected.
Testing For Hidden Capabilities
The institute also investigated whether AI models might hide their true capabilities during evaluations, a behavior known as sandbagging. Tests showed this was possible in theory, but researchers found no evidence that models were doing so in practice.
The report referenced a separate study released in May by Anthropic, which described behavior resembling blackmail when an AI system perceived threats to its continued operation. The institute noted that expert opinion remains divided on how serious the risk of rogue AI systems is.
Safeguards And Circumvention
To reduce misuse, AI companies deploy safeguards to restrict harmful behavior. The institute said researchers were able to identify universal jailbreaks for all tested models, allowing protections to be bypassed.
For some models, however, the time experts needed to circumvent safeguards increased by up to forty times within six months.
Use Of AI In High-Stakes Sectors
The report also found increased use of AI tools that enable agents to perform high-stakes tasks in sectors such as finance. The institute did not assess potential short-term job displacement linked to AI adoption.
It also did not examine the environmental impact of AI computing infrastructure, stating that its mandate was to focus on societal impacts closely tied to AI capabilities rather than broader economic or environmental effects.
The institute acknowledged that some researchers consider environmental and labor impacts to be serious and near-term concerns. Hours before the report’s publication, a separate peer-reviewed study suggested the environmental cost of advanced AI systems may be higher than previously estimated and called for greater transparency from technology companies.
