
UK's AI Safety Institute Expands to the U.S., Reveals Safety Test Findings

Algoine News
Summary:
The UK's AI Safety Institute is opening its first international office in San Francisco, an effort to tap the region's tech talent and engage with leading AI labs. The expansion aims to strengthen relationships with major U.S. players and foster global AI safety. The institute also revealed results from safety testing of five advanced AI models: the models showed some ability to tackle cybersecurity challenges and demonstrated PhD-level knowledge of chemistry and biology, but they also proved highly vulnerable to simple jailbreaks and required human supervision for complex tasks.
In a bid to extend its global reach, the United Kingdom's AI Safety Institute will set up an overseas branch in the United States. UK Technology Secretary Michelle Donelan announced the news on May 20, indicating that San Francisco had been chosen as the site of the first overseas office, set to open this summer.

Positioning the institute in San Francisco gives the UK an opportunity to harness the abundant tech talent of the Bay Area and engage with the world's largest AI labs, situated between London and San Francisco. The institute said it hopes to cement relationships with major American players and to encourage worldwide AI safety geared toward the public interest.

The London-based AI Safety Institute currently has a staff of around 30, which it plans to expand as it builds expertise, primarily in risk assessment for cutting-edge AI models. Donelan described the expansion as a testament to the UK's pioneering and strategic approach to AI safety. She reiterated the country's commitment to studying AI's risks and potential benefits on a global scale, strengthening ties with the U.S., and paving the way for other nations to draw on its expertise.

The announcement follows the landmark AI Safety Summit, held in the U.K. in November 2023 and the first of its kind to focus on AI safety at a global scale. Prominent figures from around the world attended, including Microsoft President Brad Smith, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis, and Elon Musk, alongside leaders from the US and China.

The UK also disclosed some of the institute's earlier safety test results for five publicly available advanced AI models. The models were anonymized, and the results offer a snapshot of their capabilities rather than a verdict that they are safe or hazardous. The findings showed that while some models could handle basic cybersecurity challenges, they struggled with the more difficult ones, and a few exhibited PhD-level knowledge of chemistry and biology. However, all of the models examined were highly susceptible to simple jailbreaks and required human supervision for intricate, time-consuming tasks.

Ian Hogarth, the institute's chair, said the evaluations will aid empirical assessments of model capabilities, noting that AI safety remains a nascent and developing field and that these results represent only part of the broader evaluation approach AISI is pioneering.

Published At

5/20/2024 1:00:21 PM

