
Ilya Sutskever Secures $1 Billion in Funding for New AI Venture Safe Superintelligence

By Hilary Ong

Sep 7, 2024


Ilya Sutskever, co-founder of OpenAI, has raised $1 billion from prominent investors for his new AI venture, Safe Superintelligence (SSI).

Sutskever, who left OpenAI in May 2024, announced the substantial funding on X (formerly Twitter), revealing backing from notable firms such as Andreessen Horowitz, Sequoia Capital, DST Global, and SV Angel. Additionally, NFDG, co-managed by SSI executive Daniel Gross, joined the investment round.

SSI’s Mission

Sutskever stated that SSI’s sole mission is the development of safe superintelligence, writing on X, “We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product.” He emphasized that the company’s business model is designed to insulate this mission from short-term commercial pressures, allowing progress on safety and security without distraction from management overhead or typical product cycles.

Sutskever, who co-founded OpenAI and served as its chief scientist, led OpenAI’s Superalignment team alongside Jan Leike. The team focused on ensuring the safety of AI systems, but in May 2024, both Sutskever and Leike left OpenAI. Following their departures, OpenAI disbanded the Superalignment team, just a year after its formation. Some members of the team were reassigned to other roles within the company, according to a source familiar with the matter, who spoke to CNBC.

After leaving OpenAI, Sutskever founded SSI with Daniel Gross, who had previously overseen Apple’s AI and search efforts, and Daniel Levy, a former OpenAI employee. SSI has established offices in both Palo Alto, California, and Tel Aviv, Israel, and its name reflects its mission and product focus, as noted in an X post from the company: “SSI is our mission, our name, and our entire product roadmap, because it is our sole focus.”

Criticism of OpenAI’s Safety Priorities

Leike, who joined rival AI firm Anthropic following his exit from OpenAI, commented on OpenAI’s priorities in a post on X, stating that the company’s “safety culture and processes have taken a backseat to shiny products,” suggesting a shift in focus toward more commercially driven goals at OpenAI.

Sutskever’s departure from OpenAI came after a tumultuous period at the company, which included the brief ousting of its CEO and co-founder, Sam Altman, in November 2023. Sutskever was among the OpenAI board members who voted to remove Altman, with the board citing concerns that Altman had not been “consistently candid in his communications with the board.” However, the situation grew more complicated as media reports, including from The Wall Street Journal, suggested that Sutskever’s concerns were centered on ensuring AI safety, while Altman and others were more focused on advancing new technologies.

Altman’s removal prompted nearly all of OpenAI’s employees to sign an open letter threatening to leave the company in protest. The move quickly led to Altman’s reinstatement as CEO.

Sutskever later issued a public apology for his role in the incident, writing on X, “I deeply regret my participation in the board’s actions. I never intended to harm OpenAI. I love everything we’ve built together and I will do everything I can to reunite the company.”


Featured Image courtesy of JACK GUEZ/AFP via Getty Images


Hilary Ong

Hello from one tech geek to another. Not your beloved TechCrunch writer, but a writer with an avid interest in the fast-paced tech scene and all the latest tech mojo. I bring a unique take on tech, honed by an applied psychology perspective, to make tech news digestible. In other words, I deliver tech news that is easy to read.
