
U.S. Department of Defense Launches Contest to Unearth Bias in AI Systems

Algoine News
Summary:
Summary:
The U.S. Department of Defense has launched a contest to find real-world examples of bias in AI systems, specifically bias affecting protected groups. Called a 'bias bounty', the contest aims in part to determine whether large language models such as Meta's open-source Llama 2 70B produce biased or systematically incorrect outputs in a DoD context. The top three submissions will share a $24,000 cash prize, and each approved participant will receive $250. This is the first of two such contests the Pentagon has planned.
The U.S. Department of Defense has launched a bounty program to uncover real-world instances of bias in AI systems. Participants will attempt to elicit clear examples of bias from a large language model (LLM). According to a video on the bias bounty's information page, the test subject is Meta's open-source Llama 2 70B model. The video's narrator explained that the contest's objective is to identify realistic scenarios, with potential real-world application, in which LLMs could produce biased or systematically incorrect outputs in a Department of Defense context.

Artificial Intelligence Bias

The Pentagon's initial announcement did not specify that it was seeking examples of bias against protected groups; that clarification came in the contest rules and the video. In the example shown in the video, the AI model was instructed to respond as a medical professional. Its answers to a health-related question posed on behalf of a Black woman showed clear bias compared with its answers to the same question posed on behalf of a white woman.

The Contest

While it is well known that AI systems can produce biased outputs, not every instance of bias is likely to arise under conditions relevant to the Department of Defense's everyday operations. As a result, not every bias identified will earn a bounty. Instead, the event is a contest: the three best submissions will share the bulk of the $24,000 prize pool, and every approved participant will receive $250. Submissions will be graded on a five-point rubric covering the realism of the scenario, its relevance to the protected group, supporting evidence, the conciseness of the description, and the number of prompts required to reproduce the bias (fewer prompts score higher).

The Pentagon has said this is the first of two planned bias bounties. In related news, developers were recently warned about the risks of integrating AI systems with blockchains.

Published At

1/31/2024 9:15:00 PM

Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.
