OpenAI Veterans and Y Combinator Partner Launch 'Safe Superintelligence, Inc.' to Enhance AI Safety

Algoine News
Summary:
Ilya Sutskever, former chief scientist of OpenAI, and Daniel Levy, an ex-OpenAI engineer, have joined Daniel Gross, a former partner at startup accelerator Y Combinator, to launch Safe Superintelligence, Inc. (SSI), a company with offices in Palo Alto and Tel Aviv that aims to develop artificial intelligence (AI) safety and capabilities in tandem. Echoing broader worries about AI safety, former OpenAI employees and major tech figures, including Ethereum co-founder Vitalik Buterin, Tesla CEO Elon Musk, and Apple co-founder Steve Wozniak, have called for a pause in AI training to weigh the potential risks.
Ilya Sutskever, former chief scientist and co-founder of OpenAI, and Daniel Levy, a former OpenAI engineer, have teamed up with Daniel Gross, previously a partner at startup accelerator Y Combinator, to launch Safe Superintelligence, Inc. (SSI). The new venture's name reflects both its focus and its product. SSI, with offices in Palo Alto, California, and Tel Aviv, Israel, aims to advance artificial intelligence (AI) by developing safety measures and capabilities in tandem, the founding trio announced online on June 19. They stressed the company's singular focus, free from the distractions of management overhead or product cycles, and a business model in which safety, security, and progress are insulated from short-term commercial pressures.

Before leaving OpenAI on May 14, Sutskever, together with Gross, had voiced concerns about AI safety. Sutskever's role at the firm had become unclear after he left its board following the reinstatement of CEO Sam Altman. Daniel Levy was among the researchers who departed OpenAI shortly after Sutskever.

Sutskever and Jan Leike led OpenAI's Superalignment team, formed in July 2023 to study how to steer and control AI systems more intelligent than humans, referred to as artificial general intelligence (AGI). At the team's inception, OpenAI dedicated 20% of its computing capacity to it. However, Leike also left the organization in May and now leads a team at Anthropic, an AI startup backed by Amazon. OpenAI defended its safety precautions in a detailed X post from company president Greg Brockman, but disbanded the Superalignment team after its researchers left in May.

The former OpenAI researchers are not alone in voicing apprehension about AI's future course. Amid the staff shuffling at OpenAI, Ethereum co-founder Vitalik Buterin called AGI "risky", though he also said that such models pose considerably less of a "doom risk" than corporate greed and militaries.
Former OpenAI backer and Tesla CEO Elon Musk and Apple co-founder Steve Wozniak joined more than 2,600 tech experts and researchers in calling for a six-month halt on AI system training to reflect on the "profound risk" it carries. SSI's launch announcement also noted that the company is looking to recruit researchers and engineers.

Published At

6/20/2024 12:45:47 AM

Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.
