Former OpenAI board members Helen Toner and Tasha McCauley are voicing strong criticism of the company’s governance under CEO Sam Altman.
In an op-ed for The Economist dated May 26, they detailed the concerns that led to Altman’s ousting in November: a lack of transparency and a disregard for internal safety protocols that they believe could jeopardize the safe development of artificial general intelligence (AGI).
The former board members argue in their piece that Altman’s leadership fostered a “toxic culture of lying” and behavior that amounted to “psychological abuse.” These concerns, combined with what they saw as the limits of self-governance, prompted the board’s drastic step of removing Altman in an attempt to realign the company toward safer and more ethical practices.
The disclosure of these internal issues comes amid broader calls for enhanced regulatory oversight of AI companies, especially those like OpenAI that began as non-profits with ostensibly altruistic goals.
Amid these revelations, another contentious practice has come under scrutiny: an off-boarding policy that compelled departing OpenAI employees to sign non-disclosure agreements (NDAs) barring them from criticizing the company after leaving, under threat of losing their vested equity.
This call for increased regulation is not limited to the United States. Internationally, incidents such as a recent EU warning to Microsoft, backed by the threat of a billion-dollar fine, for failing to disclose potential risks related to its AI products highlight global concern over the safety and ethical implications of rapidly advancing AI technologies. Additionally, the UK’s AI Safety Institute recently reported that the safeguards on several large language models were insufficient, as they could be bypassed with malicious prompts.
The situation at OpenAI has continued to develop following a series of high-profile resignations, including those of co-founder Ilya Sutskever and Jan Leike, who co-led the superalignment team. Both had raised alarms about safety taking a back seat to commercially attractive products, and their departures coincided with the company’s decision to disband the superalignment team, a move that has sparked further debate about OpenAI’s commitment to AI safety.
Amid these controversies, OpenAI leaders including Altman and Greg Brockman, the company’s president and co-founder, have publicly committed to improving safety measures. They acknowledge the challenges ahead, especially as the company moves closer to AGI, and emphasize ongoing collaboration with governments and other stakeholders to refine their approach to AI safety.
Featured Image courtesy of SeongJoon Cho/Bloomberg via Getty Images