
OpenAI is hiring a senior executive to study emerging risks tied to advanced AI systems, as the company points to growing concerns that range from computer security threats to potential effects on mental health.
OpenAI confirmed it is recruiting for a Head of Preparedness role, a position tasked with examining and preparing for risks linked to frontier AI capabilities. In a post on X, CEO Sam Altman said AI models are “starting to present some real challenges,” citing concerns about their potential impact on mental health and their increasing ability to identify serious computer security vulnerabilities.
Altman wrote that the role would focus on enabling cybersecurity defenders to use advanced AI capabilities while preventing attackers from abusing them. He also pointed to work on safely releasing biological capabilities and on building confidence in the safety of systems that can self-improve.
Role And Responsibilities
According to OpenAI’s job listing, the Head of Preparedness will be responsible for executing the company’s Preparedness Framework, which outlines how OpenAI tracks and prepares for frontier AI capabilities that could create new risks of severe harm. The listing puts compensation for the role at $555,000, plus equity.
The framework is intended to guide how OpenAI evaluates and responds to risks posed by increasingly capable models, particularly those that could be misused or cause unintended harm.
Background Of The Preparedness Team
OpenAI first announced the creation of its preparedness team in 2023, saying the group would study potential catastrophic risks associated with advanced AI. These included near-term threats such as phishing attacks, as well as more speculative risks, including nuclear-related scenarios.
Less than a year after the team was formed, OpenAI reassigned its then-Head of Preparedness, Aleksander Madry, to a role focused on AI reasoning. Other executives associated with safety and preparedness have since left the company or moved into roles outside those areas.
The company has also updated its Preparedness Framework, noting that it may adjust its safety requirements if a competing AI lab releases a high-risk model without similar safeguards.
Mental Health And AI Scrutiny
Altman’s comments come as generative AI tools face increasing scrutiny over their effects on mental health. Recent lawsuits have alleged that ChatGPT reinforced users’ delusions, increased social isolation, and, in some cases, contributed to suicide.
OpenAI has said it continues to work on improving ChatGPT’s ability to recognize signs of emotional distress and to direct users to real-world support resources when appropriate.
Featured image credit: Wikimedia Commons
