18 Nations Unveil Cybersecurity Guidelines for AI, Ignite Industry Discourse
Summary:
Eighteen countries, including the U.S., U.K., and Australia, have issued global guidelines focused on securing AI models against tampering. The recommendations urge AI companies to prioritize cybersecurity in their operations. However, the document does not address contentious topics such as the regulation of image-generating models and deepfakes, or data collection methods. The guidelines arrive amid a series of government initiatives tackling AI regulation, but have also met resistance from the AI industry over fears that they may hinder innovation.
Global guidelines intended to protect AI models from manipulation have been jointly issued by the United States, the United Kingdom, and Australia, along with 15 other nations. Emphasizing the security of AI models, these countries proposed on November 26 that AI organisations treat cybersecurity as a fundamental aspect of their operations.
As these nations pointed out, security is not always treated as a priority in this rapidly evolving industry. They offered a set of general guidelines, including tight monitoring of AI infrastructure, vigilance against tampering with models both before and after release, and proper cybersecurity training for employees.
Interestingly, the document did not address some hotly debated AI topics, such as the regulation of image-generating models and deepfakes, or the methods used to collect data and employ it for training models, the latter being a controversial issue that has landed several AI companies in legal trouble over copyright infringement allegations.
Artificial intelligence is at a critical juncture and may be the most far-reaching technology of our era, emphasized Alejandro Mayorkas, the U.S. Secretary of Homeland Security. He stressed that cybersecurity is the linchpin for building AI systems that are safe, reliable, and trustworthy.
The guidelines are the latest in a series of government efforts to shape AI regulation: governments and AI organizations convened earlier this month at an AI Safety Summit in London to discuss a coordinated approach to AI development, the European Union is working on its AI Act to regulate the technology, and U.S. President Joe Biden recently signed an executive order stipulating AI safety measures.
However, both of these initiatives have faced resistance from the AI sector, which cites potential impediments to innovation. The newly released guidelines, which advocate a "secure by design" approach, are endorsed by nations including Canada, France, Germany, Israel, Italy, Japan, New Zealand, Nigeria, Norway, South Korea, and Singapore, and by AI companies such as OpenAI, Microsoft, Google, Anthropic, and Scale AI, which helped formulate them.
Published At: 11/27/2023 4:45:38 AM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.