
Safe Superintelligence Inc. – Who Runs the AI Safety Company and Why Did They Push for Sam Altman's Firing?

Teuta Franjkovic

Key Takeaways

  • Ilya Sutskever co-founded Safe Superintelligence, focusing on developing safe superintelligent machines.
  • The venture will operate as a pure research organization, with no plans to sell AI products or services in the near term.
  • Despite being a high-risk investment, Safe Superintelligence is likely to attract significant funding.

Ilya Sutskever, OpenAI co-founder and former chief scientist, played a key role in the November removal of then-CEO Sam Altman and has now co-founded a new artificial intelligence company.

This new venture, named Safe Superintelligence, aims to develop superintelligent machines—those surpassing human intelligence—while ensuring their safety.

OpenAI Co-Founder Ilya Sutskever Launches Safe Superintelligence Inc.

Sutskever departed from OpenAI last month and announced he would be embarking on a new project, though he did not share specifics at the time. The company's spokeswoman, Lulu Cheng Meservey, declined to disclose the company's funding sources or the amount raised. She stated that the company, focused on building safe superintelligence, will not release other products during its development phase.

Safe Superintelligence Inc. is a new venture focused on developing a powerful and safe artificial intelligence system. Operating as a pure research organization, it has no plans to sell AI products or services in the near term. This approach allows Sutskever to continue his work without the commercial distractions faced by rivals like OpenAI, Google, and Anthropic.

He stated:

“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then. It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”

Given Sutskever's legendary status in the AI industry, his next move has captivated Silicon Valley for months. As a university researcher and later a scientist at Google, he contributed significantly to several key AI advancements. His early involvement with OpenAI was instrumental in attracting the top talent that has been vital to its success. Sutskever became a prominent advocate for developing ever-larger models, a strategy that propelled OpenAI ahead of Google and was crucial to the success of ChatGPT.

Who Is Ilya Sutskever?

Ilya Sutskever is a prominent figure in AI, known for co-authoring the groundbreaking AlexNet paper that revolutionized deep learning. Born in Soviet Russia in 1986 and raised in Jerusalem, he studied at the Open University of Israel and the University of Toronto, where he earned his Ph.D. in computer science.

After a significant breakthrough in pattern recognition with AlexNet, he joined Google, contributing to projects like TensorFlow.

In 2015, he co-founded OpenAI with Elon Musk and Sam Altman, serving as chief scientist. Sutskever has been a strong advocate for AI safety, leading OpenAI's Superalignment team to ensure AI systems remain secure. His cautious approach to AI development eventually led to a conflict with Altman, who favored a faster pace in advancing AI capabilities.

OpenAI’s ChatGPT Launches Generative AI Revolution Amid Leadership Turmoil

In November 2022, OpenAI made headlines with the debut of ChatGPT, a chatbot that could answer questions, write essays, generate computer code, and simulate human conversation. This marked a significant milestone for generative artificial intelligence, which the tech industry quickly adopted for its ability to create text, images, and other forms of media.

Many experts anticipate that these technologies will transform various domains, from email programs to internet search engines and digital assistants, potentially having an impact as profound as the invention of the web browser or the smartphone.

Amid this, The New York Times has filed a lawsuit against OpenAI and its partner, Microsoft, alleging copyright infringement related to their AI systems.

Sam Altman became a prominent advocate for generative AI, meeting with lawmakers, regulators, and investors worldwide, and testifying before Congress. However, in November, he was unexpectedly removed by Ilya Sutskever and three other OpenAI board members, who expressed a lack of trust in his leadership regarding the company’s goal to develop a machine with human-like capabilities.

Following the ouster, hundreds of OpenAI employees threatened to resign, prompting Sutskever to express regret. Altman was eventually reinstated as CEO after an agreement was reached to replace two board members with Bret Taylor, a former Salesforce executive, and Lawrence Summers, a former US Treasury secretary. Sutskever then effectively stepped down from the board.

Last year, Sutskever helped establish a Superalignment team within OpenAI, aimed at ensuring future AI technologies would be safe and not cause harm. He, like many others in the field, had grown increasingly worried about the potential dangers of AI, including the risk of it becoming a threat to humanity.

Jan Leike, who co-led the Superalignment team with Sutskever, has also resigned from OpenAI. He has since joined Anthropic, a competitor in the AI space founded by former OpenAI researchers.

Reviving OpenAI’s Original Vision with High Stakes for Investors

According to Sutskever, Safe Superintelligence harkens back to the original vision of OpenAI: a research-focused organization dedicated to developing artificial general intelligence (AGI) capable of matching or surpassing human abilities across various tasks.

As OpenAI grew, it had to evolve, forging a close partnership with Microsoft Corp. to secure the necessary funding for its immense computing power needs. This shift also pushed OpenAI towards creating revenue-generating products. Major AI companies face a similar challenge, balancing the need for substantial computational resources as AI models rapidly grow in complexity.

Sutskever’s new venture, Safe Superintelligence, includes two co-founders: Daniel Gross, an investor and former Apple Inc. AI lead known for backing high-profile AI startups like Keen Technologies, and Daniel Levy, who earned a strong reputation for training large AI models while working alongside Sutskever at OpenAI.

This economic landscape makes Safe Superintelligence a high-risk venture for investors, who must believe in Sutskever and his team's potential to achieve breakthroughs that outpace more established competitors. These investors are essentially betting on long-term success without expecting immediate, profitable products. Moreover, the feasibility of developing "superintelligence"—a level of AI significantly beyond the human-like AI most tech giants aim for—remains uncertain. There is no industry consensus on whether such an intelligence is attainable or on the methods to build it.

Despite these challenges, Safe Superintelligence is expected to attract significant investment due to the impressive credentials of its founders and the growing interest in advanced AI. As Gross points out, "Out of all the problems we face, raising capital is not going to be one of them."
