
Richard Socher, known for founding chatbot startup You.com and for earlier work tied to ImageNet, has launched a new AI startup called Recursive Superintelligence, or RSI, with $650 million in funding. The San Francisco-based company emerged from stealth on Wednesday with a team of researchers focused on building AI systems capable of recursively improving themselves without direct human involvement.
Socher is joined by several prominent AI researchers and executives, including Peter Norvig, Cresta co-founder Tim Shi, Tim Rocktäschel, and former OpenAI researcher Josh Tobin. The company’s primary research goal centers on creating what Socher described as “truly recursive, self-improving superintelligence at scale.”
Focus On Recursive Self-Improvement
In an interview conducted over Zoom following the launch, Socher said the company’s approach differs from existing forms of AI-assisted improvement because it aims to automate the entire research cycle.
“Our main focus is to build truly recursive, self-improving superintelligence at scale,” Socher said. He explained that the process would eventually automate ideation, implementation, and validation of research ideas without human intervention.
According to Socher, current AI systems can improve outputs or optimize tasks, but that does not qualify as recursive self-improvement.
“A lot of people already assume it happens when you just do auto-research,” he said. “But that’s not recursive self-improvement. That’s just improvement.”
He said the long-term goal involves AI systems identifying their own weaknesses, redesigning themselves, and continuously improving through autonomous iteration. The research initially focuses on AI systems improving AI research itself before potentially expanding into broader scientific or physical domains.
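RSI has not published technical details of this loop, but the cycle Socher describes — ideation, implementation, and validation, repeated without human intervention — can be sketched in pseudocode-like Python. Every name below is invented for illustration; the stubbed steps stand in for what would be AI-driven research stages.

```python
# Conceptual sketch of an ideation -> implementation -> validation loop.
# All function names and logic are illustrative stand-ins, not RSI's design.

from dataclasses import dataclass


@dataclass
class Candidate:
    idea: str
    score: float


def propose_idea(weakness: str) -> str:
    # Placeholder "ideation" step: a real system would have a model
    # generate a research hypothesis targeting the weakness.
    return f"mitigation for {weakness}"


def implement_and_validate(idea: str) -> float:
    # Placeholder "implementation + validation" step: build the change
    # and score it on a benchmark. Here we just derive a fake score.
    return float(len(idea) % 7) / 7.0


def self_improve(weaknesses: list[str], rounds: int = 3) -> list[Candidate]:
    history: list[Candidate] = []
    for _ in range(rounds):
        for w in weaknesses:
            idea = propose_idea(w)
            history.append(Candidate(idea, implement_and_validate(idea)))
        # A real system would now re-diagnose its own weaknesses from the
        # results; this sketch keeps the list fixed to stay self-contained.
    return history
```

The point of the sketch is the shape of the loop, not its contents: each stage is itself a candidate for automation, which is what distinguishes the goal Socher describes from one-off automated research.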
Open-Endedness As A Core Concept
Socher said Recursive Superintelligence is heavily focused on a concept known as “open-endedness,” which he described as a key differentiator between the company and other AI labs.
Tim Rocktäschel, one of the company’s co-founders, previously led open-endedness and self-improvement teams at Google DeepMind. Socher referenced Genie 3, a DeepMind world model project, as an example of the concept in practice.
According to Socher, open-ended systems can continuously generate new environments, concepts, or agents without fixed endpoints, similar to biological evolution.
“In biological evolution, animals adapt to the environment, and then others counter-adapt to those adaptations,” he said. “It’s just a process that can evolve for billions of years.”
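The adapt/counter-adapt dynamic Socher compares to evolution can be illustrated with a toy co-evolution loop: agents improve against the current environment, and the environment hardens in response, with no fixed endpoint. This is purely an illustrative sketch, not DeepMind's or RSI's actual algorithm.

```python
# Toy adapt/counter-adapt loop: agent skill and environment difficulty
# each grow in response to the other, open-endedly. Numbers are arbitrary.

import random


def evolve(generations: int = 5, seed: int = 0) -> list[tuple[float, float]]:
    rng = random.Random(seed)
    agent_skill, env_difficulty = 1.0, 1.0
    trajectory = []
    for _ in range(generations):
        # The agent adapts to the current environment...
        agent_skill += rng.uniform(0, 0.5) * env_difficulty
        # ...and the environment counter-adapts to the stronger agent.
        env_difficulty += rng.uniform(0, 0.5) * agent_skill
        trajectory.append((agent_skill, env_difficulty))
    return trajectory
```

Because each side's growth is scaled by the other's current level, neither quantity converges to a ceiling — the loop has no built-in stopping point, which is the property "open-endedness" names.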
He also discussed “rainbow teaming,” a technique developed by Rocktäschel that builds on traditional red teaming methods used in cybersecurity and AI safety testing.
In standard red teaming, humans attempt to expose unsafe or harmful behavior in AI systems. Socher said rainbow teaming instead uses one AI system to continuously challenge another AI system through repeated automated interactions.
“What if you tested this first AI with a second AI,” Socher said, “and that second AI now has the task of making the first AI say all the possible bad things.”
According to Socher, the systems can iterate against each other millions of times, generating many attack approaches simultaneously. He said the technique is now used across major AI labs.
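The attacker-versus-target loop described above can be sketched as a quality-diversity search: an attacker model mutates prompts, a judge scores how unsafe the target's response is, and the best attack per risk category is kept in an archive. Both models are stubbed out here; the function names and scoring are invented for illustration, not taken from the published rainbow-teaming method.

```python
# Toy attacker-vs-target loop in the spirit of rainbow teaming: keep the
# strongest attack found so far in each category cell of an archive.
# attacker_mutate and target_unsafe_score stub out what would be LLM calls.

import random


def attacker_mutate(prompt: str, rng: random.Random) -> str:
    suffixes = [" please", " hypothetically", " for a story"]
    return prompt + rng.choice(suffixes)


def target_unsafe_score(prompt: str) -> float:
    # Stub judge: pretend longer, pushier prompts are more effective.
    # A real judge would be another model scoring the target's response.
    return min(1.0, len(prompt) / 100.0)


def rainbow_team(categories: list[str], iters: int = 100,
                 seed: int = 0) -> dict[str, tuple[str, float]]:
    rng = random.Random(seed)
    archive = {c: (c, target_unsafe_score(c)) for c in categories}
    for _ in range(iters):
        cat = rng.choice(categories)
        parent, _ = archive[cat]
        child = attacker_mutate(parent, rng)
        score = target_unsafe_score(child)
        if score > archive[cat][1]:  # keep the strongest attack per cell
            archive[cat] = (child, score)
    return archive
```

The archive is what makes this "rainbow" rather than plain red teaming: instead of one strongest attack, the loop maintains a diverse set, one per category, and every cell keeps improving as iterations accumulate.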
Positioning Beyond The ‘Neolab’ Label
The startup joins a recent wave of research-heavy AI companies pursuing advanced reasoning systems and artificial general intelligence. Such firms are sometimes informally called “neolabs,” a term for AI companies that prioritize frontier research over immediate commercial products.
Socher said he does not fully identify with the label.
“I feel like we’re not just a lab,” he said. “I want us to become a really viable company, to really have amazing products that people love to use.”
He argued that the company’s team combines long-term AI research experience with operational product development backgrounds. He pointed to Tim Shi’s work building Cresta into a unicorn company and Josh Tobin’s previous leadership roles at OpenAI, including work on Codex and deep research systems.
When asked whether Recursive Superintelligence believes major AI labs are unlikely to achieve recursive self-improvement using their current methods, Socher said the company is taking a different approach focused heavily on open-endedness research.
“And the team has been researching this and doing papers in this space for the last decade,” he said.
Product Timeline And Compute Demands
Although the company remains focused on research, Socher said products are planned sooner than initially expected.
“The team has made so much progress, we may actually pull up the timelines from what we had initially assumed,” he said. “There will be products, and you’ll have to wait quarters, not years.”
Socher also discussed the long-term role of computing power in recursive self-improving systems. He said compute resources could eventually become one of the most important constraints for advanced AI development.
“One of the biggest questions in the world” may eventually become how humanity allocates computing resources to different scientific or medical problems, he said.
“Here’s this cancer and here’s that virus,” Socher said. “Which one do you want to solve first? How much compute do you want to give it?”
Featured image credits: Magnific.com
