OpenAI co-founder and former chief scientist Ilya Sutskever has revealed his new startup, Safe Superintelligence, a purely research-oriented organization aimed at building a safe and powerful artificial intelligence (AI) system.
Bloomberg reported that the Israeli-Canadian tech boss announced the firm after leaving OpenAI in mid-May, in the wake of the board coup that ousted CEO Sam Altman before eventually reinstating him.
Sutskever said the new company's first product would be "safe superintelligence," with no near-term intention of selling AI products or services. This means he intends to continue his work free of the competitive pressures facing rivals such as his former firm OpenAI, Google, and Anthropic.
He also declined to comment on the company's financial backers or how much it has raised so far.
CNBC recalled that Sutskever co-led OpenAI's Superalignment team with Jan Leike, who also left the company in May and joined Anthropic.
Sutskever Working on AI Safety
As its name implies, Sutskever's new startup makes AI safety its top priority, while working out what exactly makes one AI system safer than another.
While Sutskever remains vague on specifics, he did suggest that the new venture would pursue safety through engineering breakthroughs built into the AI system itself.
Safe Superintelligence has three co-founders: Sutskever, investor and former Apple AI lead Daniel Gross, and OpenAI colleague Daniel Levy.
Levy, for his part, said "the time is right" for him to work with Sutskever on AI safety, adding that they share the same vision: a "small, lean, cracked team" solely focused on safe superintelligence.
Safe Superintelligence would be based in Palo Alto, California, and Tel Aviv.
Safe Superintelligence is a throwback to what OpenAI was originally conceived as: a research outfit trying to build artificial general intelligence (AGI) that could equal or surpass humans on many tasks.
However, with so little disclosed, investors would have to gamble on Sutskever and his team hitting the breakthroughs needed to give the company an edge over its rivals.