DMR News

Advancing Digital Conversations

OpenAI, Google DeepMind Workers Urge Stronger Whistleblower Protections

By Huey Yee Ong

Jun 6, 2024

A collective of current and former employees of prominent AI companies, including OpenAI, Google DeepMind, and Anthropic, has issued an open letter voicing concerns over the rapid advancement of artificial intelligence (AI) without adequate oversight or whistleblower protections.

The letter, published on Tuesday, June 4, emphasizes that without effective government oversight, employees are among the few who can hold these corporations accountable. However, broad confidentiality agreements often prevent them from speaking out, leaving them able to raise concerns only with the very companies that may be failing to address the issues.

OpenAI, Google, Microsoft, Meta, and other companies are leading a generative AI race, a market projected to surpass $1 trillion in revenue within a decade. Companies across various industries are incorporating AI-powered chatbots and agents to stay competitive.

The letter emphasized that AI companies possess substantial non-public information about their technologies, the extent of safety measures, and the risks associated with these technologies. The employees expressed concerns about the weak obligations these companies have to share information with governments and civil society.

“We understand the serious risks posed by these technologies,” they wrote, adding that companies cannot be relied upon to share this information voluntarily.

The letter also highlighted the lack of whistleblower protections in the AI industry. Employees face broad confidentiality agreements that prevent them from raising concerns except within the companies themselves. “Ordinary whistleblower protections are insufficient because they focus on illegal activity, whereas many of the risks we are concerned about are not yet regulated,” the signatories noted.

The letter urged AI companies to:

  • Refrain from enforcing non-disparagement agreements.
  • Establish anonymous processes for employees to voice concerns to boards, regulators, and others.
  • Support a culture of open criticism.
  • Avoid retaliating against public whistleblowing if internal reporting fails.

The letter was signed by four anonymous current OpenAI employees and seven former ones, including Daniel Kokotajlo, Jacob Hilton, William Saunders, Carroll Wainwright, and Daniel Ziegler. Other signatories included Ramana Kumar, formerly of Google DeepMind, and Neel Nanda, currently at Google DeepMind and formerly at Anthropic. Renowned computer scientists Geoffrey Hinton, Yoshua Bengio, and Stuart Russell also endorsed the letter.

Growing Controversy Surrounding OpenAI

In May, OpenAI reversed a controversial decision requiring former employees to choose between signing a non-disparagement agreement and keeping their vested equity. An internal memo apologized for the initial decision, acknowledging it did not reflect the company’s values.

The open letter followed OpenAI’s decision to disband its team focused on long-term AI risks, just a year after its formation. Some team members were reassigned within the company. The team disbandment came after leaders Ilya Sutskever and Jan Leike left the startup. Leike cited disagreements with OpenAI leadership over the company’s priorities and stressed the need for a safety-first approach to AGI development.

“Building smarter-than-human machines is an inherently dangerous endeavor,” Leike wrote, emphasizing the importance of a strong safety culture at OpenAI. He expressed concerns that safety had been sidelined in favor of product development.

The letter and internal disagreements come on the heels of a leadership crisis at OpenAI in November, when CEO Sam Altman was ousted by the board, only to be reinstated a week later following significant backlash from employees and investors, including Microsoft.

Also in May, OpenAI unveiled a new AI model along with a desktop version of ChatGPT, featuring enhanced functionalities including audio capabilities. However, the company faced controversy over one of the chatbot’s voices, “Sky,” which resembled actress Scarlett Johansson’s voice in the film “Her.” Johansson alleged that OpenAI used her voice without permission, adding to the company’s mounting challenges.


Featured Image courtesy of NurPhoto via Getty Images

