OpenAI and Microsoft Partner to Thwart State-Affiliated Cyberattacks Exploiting AI Tools
Summary:
OpenAI and Microsoft collaborated to thwart five state-affiliated cyberattack operations that exploited large language models, including OpenAI's GPT-4. The operations, linked to China, Iran, North Korea, and Russia, sought to use AI for tasks such as reconnaissance, script generation, and phishing, prompting OpenAI to reinforce its defenses. Despite ongoing efforts by OpenAI and others to address AI cybersecurity, threats persist, leading more than 200 organizations to join or form initiatives focused on promoting AI safety and addressing cybersecurity challenges.
OpenAI, the organization behind the popular ChatGPT software, has teamed up with its primary investor, Microsoft, to successfully ward off five government-backed cyberattack operations. Linked to foreign powers including Russian military intelligence, Iran's Revolutionary Guard, and the Chinese and North Korean governments, the hackers aimed to hone their operations using large language models (LLMs), AI systems that generate strikingly human-like text after training on vast amounts of textual data.
OpenAI identified the cyber intrusions as originating from two Chinese groups known as Charcoal Typhoon and Salmon Typhoon. Other infiltration attempts were attributed to Iran through Crimson Sandstorm, North Korea via Emerald Sleet, and Russia through Forest Blizzard. According to OpenAI, the groups sought to use GPT-4 for tasks including researching companies and cybersecurity tools, generating scripts, debugging code, orchestrating phishing campaigns, evading malware detection, translating technical papers, and exploring satellite communication and radar technology. Upon detecting these activities, OpenAI suspended the accounts involved.
This discovery came as OpenAI was moving to impose a comprehensive ban on the use of AI products by state-sponsored hacking networks. Although successful in thwarting these specific attempts, the company recognized the ongoing difficulty of forestalling every potential misuse.
As AI-generated deepfakes and fraud have proliferated since the introduction of ChatGPT, lawmakers have stepped up their scrutiny of generative AI developers. To support advances in AI cybersecurity, OpenAI announced a $1 million grant program in June 2023 aimed at strengthening and measuring the impact of AI-driven cybersecurity technologies.
Despite these defensive measures and OpenAI's continuous efforts to keep ChatGPT from generating harmful or inappropriate content, hackers have still found ways around these safeguards, manipulating the chatbot into producing such content.
Over 200 organizations, among them OpenAI, Microsoft, Anthropic, and Google, recently partnered with the Biden Administration to form the U.S. AI Safety Institute Consortium (AISIC). The collective endeavor is designed to foster the safe development of artificial intelligence, counter AI-generated deepfakes, and address cybersecurity challenges. The consortium operates under the U.S. AI Safety Institute (USAISI), created following President Joe Biden's late-October 2023 executive order on AI safety.
Published At
2/15/2024 11:46:18 AM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.