Key Takeaways
Ilya Sutskever, OpenAI co-founder and former chief scientist, played a key role in the November 2023 ouster of CEO Sam Altman and has now co-founded a new artificial intelligence company.
This new venture, named Safe Superintelligence, aims to develop superintelligent machines—those surpassing human intelligence—while ensuring their safety.
Sutskever departed from OpenAI last month and announced he would be embarking on a new project, though he did not share specifics at the time. The company’s spokeswoman, Lulu Cheng Meservey, declined to disclose the company’s funding sources or the amount raised. She stated that the company, focused on building safe superintelligence, will not release other products during its development phase.
Superintelligence is within reach.
Building safe superintelligence (SSI) is the most important technical problem of our time.
We've started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence.
It’s called Safe Superintelligence…
— SSI Inc. (@ssi) June 19, 2024
Safe Superintelligence Inc. is a new venture focused on developing a powerful and safe artificial intelligence system. Operating as a pure research organization, it has no plans to sell AI products or services in the near term. This approach allows Sutskever to continue his work without the commercial distractions faced by rivals like OpenAI, Google, and Anthropic.
He stated:
“This company is special in that its first product will be the safe superintelligence, and it will not do anything else up until then. It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race.”
Given Sutskever’s legendary status in the AI industry, his next move has captivated Silicon Valley for months. As a university researcher and later a scientist at Google, he significantly contributed to several key AI advancements. His early involvement with OpenAI was instrumental in attracting top talent, which has been vital to its success. Sutskever became a prominent advocate for developing larger models, a strategy that propelled OpenAI ahead of Google and was crucial to the success of ChatGPT.
Ilya Sutskever is a prominent figure in AI, known for co-authoring the groundbreaking AlexNet paper that revolutionized deep learning. Born in Soviet Russia in 1986 and raised in Jerusalem, he studied at the Open University of Israel and the University of Toronto, where he earned his Ph.D. in computer science.
After a significant breakthrough in pattern recognition with AlexNet, he joined Google, contributing to projects like TensorFlow.
In 2015, he co-founded OpenAI with Elon Musk and Sam Altman, serving as chief scientist. Sutskever has been a strong advocate for AI safety, leading OpenAI’s Superalignment team to ensure AI systems remain secure. His cautious approach to AI development eventually led to a conflict with Altman, who favored a faster pace in advancing AI capabilities.
Elon Musk was worried that Microsoft would take control of OpenAI 👀 pic.twitter.com/V09usVB4Ut
— Kris Kashtanova (@icreatelife) November 20, 2023
In November 2022, OpenAI made headlines with the debut of ChatGPT, a chatbot that could answer questions, write essays, generate computer code, and simulate human conversation. This marked a significant milestone for generative artificial intelligence, which the tech industry quickly adopted for its ability to create text, images, and other forms of media.
Many experts anticipate that these technologies will transform various domains, from email programs to internet search engines and digital assistants, potentially having an impact as profound as the invention of the web browser or the smartphone.
Amid this, The New York Times has filed a lawsuit against OpenAI and its partner, Microsoft, alleging copyright infringement related to AI systems.
Sam Altman became a prominent advocate for generative AI, meeting with lawmakers, regulators, and investors worldwide, and testifying before Congress. However, in November 2023, he was unexpectedly removed by Ilya Sutskever and three other OpenAI board members, who expressed a lack of trust in his leadership regarding the company’s goal to develop a machine with human-like capabilities.
Following the ouster, hundreds of OpenAI employees threatened to resign, prompting Sutskever to express regret. Altman was eventually reinstated as CEO after an agreement was reached to replace two board members with Bret Taylor, a former Salesforce executive, and Lawrence Summers, a former US Treasury secretary. Sutskever then effectively stepped down from the board.
I deeply regret my participation in the board's actions. I never intended to harm OpenAI. I love everything we've built together and I will do everything I can to reunite the company.
— Ilya Sutskever (@ilyasut) November 20, 2023
Jan Leike, who co-led the Superalignment team with Sutskever, has also resigned from OpenAI. He has since joined Anthropic, a competitor in the AI space founded by former OpenAI researchers.
According to Sutskever, Safe Superintelligence harkens back to the original vision of OpenAI: a research-focused organization dedicated to developing artificial general intelligence (AGI) capable of matching or surpassing human abilities across various tasks.
As OpenAI grew, it had to evolve, forging a close partnership with Microsoft Corp. to secure the necessary funding for its immense computing power needs. This shift also pushed OpenAI towards creating revenue-generating products. Major AI companies face a similar challenge, balancing the need for substantial computational resources as AI models rapidly grow in complexity.
We're rolling out interactive tables and charts along with the ability to add files directly from Google Drive and Microsoft OneDrive into ChatGPT. Available to ChatGPT Plus, Team, and Enterprise users over the coming weeks. https://t.co/Fu2bgMChXt pic.twitter.com/M9AHLx5BKr
— OpenAI (@OpenAI) May 16, 2024
Sutskever’s new venture, Safe Superintelligence, includes two co-founders: Daniel Gross, an investor and former Apple Inc. AI lead known for backing high-profile AI startups like Keen Technologies, and Daniel Levy, who earned a strong reputation for training large AI models while working alongside Sutskever at OpenAI.
This economic landscape makes Safe Superintelligence a high-risk venture for investors, who must believe in Sutskever and his team’s potential to achieve breakthroughs that outpace more established competitors. These investors are essentially betting on long-term success without expecting immediate, profitable products. Moreover, the feasibility of developing “superintelligence”—a level of AI significantly beyond the human-like AI most tech giants aim for—remains uncertain. There is no industry consensus on whether such an intelligence is attainable or the methods to build it.
We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product. We will do it through revolutionary breakthroughs produced by a small cracked team. Join us: https://t.co/oYL0EcVED2
— Ilya Sutskever (@ilyasut) June 19, 2024
Despite these challenges, Safe Superintelligence is expected to attract significant investment due to the impressive credentials of its founders and the growing interest in advanced AI. As Gross points out, “Out of all the problems we face, raising capital is not going to be one of them.”