Safe Superintelligence, co-founded by OpenAI's former chief scientist, raises $1 billion

SSI aims to achieve 'safe superintelligence in a straight shot,' a goal that has attracted high-profile investors

Safe Superintelligence (SSI), an AI startup co-founded by former OpenAI chief scientist Ilya Sutskever, has raised $1 billion in funding, valuing the company at $5 billion. SSI aims to achieve "safe superintelligence in a straight shot," a goal that has attracted high-profile investors including Andreessen Horowitz, Sequoia Capital, DST Global, SV Angel, and NFDG, the fund run by Nat Friedman and Daniel Gross.
The funds will be directed toward developing AI systems that surpass human capabilities, with a strong emphasis on safety. According to the Daily Mail, Sutskever said SSI will focus on research and development before bringing any product to market. The company is assembling a small, trusted team of researchers and engineers in Palo Alto, California, and Tel Aviv, and currently employs 10 people.
OpenAI CEO Sam Altman and SSI founder Ilya Sutskever at Tel Aviv University a year ago
(Photo: Avigail Uzi)
To meet its computing requirements, SSI plans to partner with cloud providers and chip companies, though it has yet to choose specific firms. Daniel Gross, who previously led AI initiatives at Apple, will oversee computing power and fundraising at SSI.
Sutskever was an early proponent of the scaling hypothesis, which suggests that AI models significantly improve with substantial computing power. However, he has indicated that he will approach scaling differently than his former employer, though he has not shared specific details. At SSI, Sutskever has emphasized a singular focus on AI development without the distractions of management overhead or product cycles.
AI safety is a crucial issue amid fears that rogue AI may act against humanity's interests. A California bill proposing safety regulations is opposed by companies like OpenAI and Google, but supported by Anthropic and xAI.
Following Sutskever's departure, OpenAI dismantled his "Superalignment" team, which focused on ensuring that advanced AI systems remain aligned with human values. Sutskever has described a strategic shift away from his work at OpenAI, saying he identified "a mountain that's a bit different from what I was working on," and that working differently would give him the potential to "do something special" in AI development.
In contrast to OpenAI's unorthodox corporate structure, which made Sam Altman's ouster possible, Safe Superintelligence has adopted a conventional for-profit structure.
Daniel Levy, a former OpenAI researcher, currently serves as a principal scientist at SSI. The company is focused on hiring individuals who fit its culture, emphasizing "good character" and extraordinary capabilities over credentials and experience.
"One thing that excites us is when you find people that are interested in the work, that are not interested in the scene, in the hype," Daniel Gross said.
Sutskever's departure from OpenAI was marked by controversy. Nearly all OpenAI employees signed an open letter demanding the return of Sam Altman and the resignation of the board following his ouster. Sutskever himself expressed regret about his role in the board's decision, saying, "I never intended to harm OpenAI. I love everything we've built together, and I will do everything I can to reunite the company."
The OpenAI board had stated that Altman was not "consistently candid in his communications with the board," and his ouster was later attributed to a "breakdown of communications."
This article was written in collaboration with Generative AI news company Alchemiq
Sources: Economic Times, Daily Mail Online, CNBC, Business Insider, Bloomberg.