Live Chat

Crypto News

Cryptocurrency News 1 years ago
ENTRESRUARPTDEFRZHHIIT

NIST Seeks Public Input on Secure and Ethical Development of AI Systems

Algoine News
Summary:
The National Institute of Standards and Technology (NIST) has issued a Request for Information (RFI) to gather public input on the secure development and application of artificial intelligence (AI). Under the direction of President Biden's Executive Order, NIST is seeking insight on risk management of generative AI and ways to diminish AI-generated misinformation. The institute is also probing effective areas for red-teaming - a strategy used to identify system vulnerabilities. The request is part of a broader initiative by NIST to support the AI community for responsibly and reliably developing AI systems, escalating the human-centric approach towards AI safety and governance.
The U.S. Department of Commerce's National Institute of Standards and Technology (NIST) has issued a Request for Information (RFI) as part of its duties under the latest Executive Order on the secure and ethical development and implementation of artificial intelligence (AI). The institute has opened a public input process until February 2 to gather crucial insights to be used for testing the safety of AI systems. Secretary of Commerce Gina Raimondo mentioned that President Joe Biden's October executive directive, which instructs NIST to develop guidelines, encourage consensual standards and build testing environments for AI evaluation, motivates this initiative. This blueprint aims to aid the AI sector in the safe, reliable, and ethical development of AI. The request for information from NIST invites insights from AI firms and the public concerning the risk management of generative AI and the reduction of AI-induced misinformation risks. Generative AI, which can create text, images, and videos based on ambiguous prompts, has sparked both excitement and worries. There are fears over job losses, disturbances in elections, and the potential of the technology exceeding human competence, which might yield disastrous outcomes. The RFI also seeks information on the optimal areas for executing red-teaming during AI risk examination and establishing best practices. The term “red-teaming,” derived from Cold War simulations, is used to refer to a method where a group, known as the “red team,” mimics conceivable adversarial situations or attacks to uncover the weak points and vulnerabilities of a system, process, or organization. This technique has long been used in cybersecurity to detect new threats. A related note is that the first-ever public red-teaming event in the U.S. took place in August at a cybersecurity conference arranged by AI Village, SeedAI, and Humane Intelligence. The announcement of a new AI consortium by NIST in November, coupled with an official notice calling for applications with relevant qualifications, was an important milestone. The aim of this consortium is to develop and execute specific policies and measures to make sure U.S. policymakers opt for a human-centric approach to AI safety and administration.

Published At

12/20/2023 11:46:51 AM

Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.

Do you suspect this content may be misleading, incomplete, or inappropriate in any way, requiring modification or removal? We appreciate your report.

Report

Fill up form below please

🚀 Algoine is in Public Beta! 🌐 We're working hard to perfect the platform, but please note that unforeseen glitches may arise during the testing stages. Your understanding and patience are appreciated. Explore at your own risk, and thank you for being part of our journey to redefine the Algo-Trading! 💡 #AlgoineBetaLaunch