
OpenAI Announces New Safety Team with Sam Altman in Charge

By Huey Yee Ong

May 30, 2024

OpenAI has restructured its approach to oversight, establishing a new safety and security committee. The move was announced on Tuesday, May 28, shortly after the company dissolved the team dedicated to examining the long-term risks of artificial intelligence.

The newly formed committee is charged with guiding critical safety and security decisions across OpenAI’s projects and operations. The pivot reflects OpenAI’s stated commitment to advancing AI safely, particularly as it begins developing its “next frontier model”.

According to a recent blog post by OpenAI, this model is expected to significantly enhance the company’s capabilities, bringing it closer to Artificial General Intelligence (AGI): an AI system with human-level or greater intelligence.

Members of the New Safety Committee

The safety committee is composed of prominent figures including Sam Altman, Bret Taylor, Adam D’Angelo, and Nicole Seligman, all of whom are members of OpenAI’s board of directors. Their combined expertise is expected to help the organization integrate robust safety measures across all of its AI work.

This restructuring follows the dissolution of a former oversight group that concentrated on the potential long-term dangers of AI. The decision to reorganize safety oversight came amid criticism that the company’s focus on product development had overshadowed safety protocols.

Ilya Sutskever and Jan Leike, who led the now-dissolved team, left the company earlier this month. Responding on the social media platform X to Leike’s departure and his criticism that OpenAI had prioritized “shiny products” over stringent safety measures, Altman expressed regret and acknowledged that the company still has significant strides to make in this arena.

Over the next 90 days, the new safety group will evaluate OpenAI’s current processes and safeguards and present its findings and recommendations to the board. An update on the recommendations the company adopts is expected to follow.

AI safety is becoming increasingly critical as the models that power applications like ChatGPT grow more capable and complex. The prospect of AGI brings a mix of excitement and concern, prompting AI developers and the broader tech community to weigh the implications of, and necessary precautions for, future advances.


Featured Image courtesy of ELIZABETH FRANTZ/REUTERS

Huey Yee Ong

Hello, from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
