OpenAI Co-Founder Ilya Sutskever Reveals New AI Startup Safe Superintelligence

By Jose Resurreccion

Jun 19, 2024 09:16 PM EDT

Ilya Sutskever, the Russian-born Israeli-Canadian computer scientist and OpenAI co-founder and chief scientist, speaks at Tel Aviv University in Tel Aviv on June 5, 2023.
(Photo: JACK GUEZ/AFP via Getty Images)

OpenAI co-founder and former chief scientist Ilya Sutskever has revealed his new startup, Safe Superintelligence, which aims to create a safe and powerful artificial intelligence (AI) system within a purely research-oriented organization.

Bloomberg reported that the Israeli-Canadian tech boss announced the firm after leaving OpenAI in mid-May, in the wake of the board coup that ousted CEO Sam Altman before eventually reinstating him.

Sutskever said the new company aims to deliver "safe superintelligence" as its first product, with no near-term intention of selling AI products or services. This means he can continue his work without being distracted by competitors such as his former firm OpenAI, Google, or Anthropic.

He also declined to comment on the company's financial backers or how much it has raised so far.

CNBC recalled that Sutskever co-led OpenAI's Superalignment team with Jan Leike, who also left alongside him in May to join Anthropic.


Sutskever Working on AI Safety

As its name implies, Sutskever's new startup makes AI safety its top priority, starting with the question of what exactly makes one AI system safer than another.

While Sutskever remains vague about the concept, he suggested that the new venture would try to achieve safety through engineering breakthroughs integrated into the AI system itself.

Safe Superintelligence has three co-founders: Sutskever, investor and former Apple AI lead Daniel Gross, and former OpenAI colleague Daniel Levy.

Levy said that "the time is right" for him to work with Sutskever on AI safety. The two share a vision of a "small, lean, cracked team" focused solely on that goal.

Safe Superintelligence will be based in Palo Alto, California, and Tel Aviv, Israel.

Safe Superintelligence is a throwback to what OpenAI was initially conceived as: a research firm trying to build artificial general intelligence (AGI) that could equal or surpass humans on many tasks.

Without further information, however, investors would have to gamble on Safe Superintelligence achieving the breakthroughs that would give it an edge over its rivals.


© 2024 VCPOST, All rights reserved. Do not reproduce without permission.
