White House Unveils Comprehensive AI Policy to Manage Risks and Enhance Reporting
Summary:
The White House has introduced its first comprehensive AI policy, which requires federal agencies to report on their AI use and address its potential risks within 60 days, including by appointing a Chief AI Officer and putting safeguards in place to protect the public. Some Department of Defense uses will be exempt from disclosure. The Office of Management and Budget aims to guide the safe and effective use of AI across government, stressing the need for trustworthy AI systems. Government contractors are encouraged to follow best practices, and the administration plans to hire 100 AI experts by summer.
The White House has launched its first comprehensive policy on the regulation and risk management of artificial intelligence (AI), requiring government agencies to strengthen reporting on their use of the technology while addressing the potential threats it could pose. A memorandum issued by the White House on March 28 directs federal agencies to appoint a chief AI officer, disclose their AI operations, and deploy safeguards within the following 60 days. The instruction builds on the AI executive order President Biden signed in October 2023.

In a press teleconference, Vice President Kamala Harris emphasized that leaders across government, civil society, and industry share a responsibility to ensure AI is developed and deployed in ways that avoid public harm while maximizing its benefits.

The new rule, an undertaking of the Office of Management and Budget, is designed to guide the entire federal government in using AI safely and effectively amid the technology's rapid growth. While eager to harness AI's advantages, the Biden administration remains wary of its emerging hazards. As the memo notes, some AI uses, particularly within the Department of Defense, will be exempt from mandatory inventory disclosure because disclosure would conflict with existing laws and government policies.

By December 1, agencies must put concrete safeguards in place for AI uses that could affect Americans' safety or rights. For example, travelers should be able to easily opt out of TSA facial recognition at airports. Agencies that fail to implement these safeguards must stop using the affected AI system, unless agency leadership can justify why discontinuing it would increase safety or rights risks or impede critical agency operations.
The OMB's new AI rules build on the Biden administration's Blueprint for an AI Bill of Rights from October 2022 and the National Institute of Standards and Technology's AI Risk Management Framework from January 2023, both of which stress the need for trustworthy AI systems. The OMB is also soliciting input on how to hold government technology contractors to best practices, and plans to align agencies' AI contracts with its policy later in the year. In addition, the administration announced plans to hire 100 AI professionals for government roles by summer, in keeping with the "talent surge" provision of the October executive order.
Published At
3/28/2024 3:12:01 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.