OPENAI CO-FOUNDER LAUNCHES NEW AI COMPANY TO SAFELY STEER THE FUTURE OF SUPERINTELLIGENT AI

Ilya Sutskever, co-founder and former board member of OpenAI, has announced the launch of a new artificial intelligence company named Safe Superintelligence Inc. (SSI), dedicated to the safe development of ‘superintelligence’, AI systems whose capabilities would surpass those of humans. The new venture responds to growing concerns about the potential risks of superintelligent AI systems while aiming to harness their transformative potential for the benefit of humanity.

A new chapter in AI development

Ilya Sutskever, the OpenAI co-founder who left the company last month and has long emphasized ethical and safe AI development, announced on X (formerly Twitter) that he has taken on a new role at a new company called Safe Superintelligence Inc. The company is poised to take major steps to ensure that the future of artificial intelligence is both innovative and secure. In Sutskever’s own words, it is the ‘world’s first straight-shot SSI lab’.

Areas of focus

The company was founded with a “singular focus”. In a recent post on X, Sutskever wrote:

“Our singular focus means no distraction by management overhead or product cycles, and our business model means safety, security, and progress are all insulated from short-term commercial pressures.”

This underlines the company’s vision and provides a clear road map for how it intends to operate.

Leadership and Vision

SSI’s leadership team includes some of the brightest minds in the AI industry, bringing a wealth of experience and knowledge to the table. The company was founded by Ilya Sutskever along with Daniel Gross, who previously led Apple’s AI and search efforts, and Daniel Levy, a former OpenAI researcher. All three envision SSI as a role model for the AI industry, demonstrating that advanced AI research can continue while putting safety and ethics first.

Why Safe Superintelligence Matters

Superintelligence refers to artificial intelligence systems that surpass human intelligence in every way. While the potential benefits of such systems are enormous, from solving complex scientific problems to revolutionizing industry, they also come with significant risks. Unless it is properly managed and aligned with human values, a superintelligent AI could take decisions or actions that harm human interests. This is where Safe Superintelligence aims to make a difference.

Conclusion

SSI’s mission to ensure the safe development of superintelligent artificial intelligence is a testament to the growing recognition of the ethical and security challenges posed by AI technologies. Led by Ilya Sutskever, Daniel Levy, and Daniel Gross, the new company is well positioned to lead the creation of AI systems that are both capable and compatible with human values. Looking to the future, SSI’s work will be essential in shaping a world where AI is a force for good, enhancing human capabilities while protecting our values and interests.
