
Controlling Harmful AI: A Call for Enhanced Government-Backed Systems and Compute Governance

Algoine News
Summary:
Researchers suggest that controlling harmful artificial intelligence (AI) may require the continuous enhancement of powerful, government-backed AI systems. Their study argues for controlling access to vital AI resources through the implementation of "compute governance." Effectively restricting malicious use of AI could require building "kill switches" into hardware, enabling the remote termination of illegal operations. They also warn that decentralized compute could complicate such controls, and suggest that societies may need to deploy more powerful, governable compute wisely to counter emerging risks from ungovernable compute.
The control of harmful artificial intelligence (AI) may lie in the continuous enhancement of AI under governmental guidance. That is the conclusion drawn by a group of researchers in their recent paper, "Computing Power and the Governance of Artificial Intelligence." The study, involving scholars from institutions including OpenAI, Cambridge, and Oxford, explores the immediate and potential challenges of regulating the progression of AI.

The paper's central contention is that controlling access to pivotal future AI systems may require controlling the hardware needed to train and operate them. As the researchers explain, policymakers can leverage computing power to promote beneficial outcomes, curb irresponsible AI development and usage, and gain regulatory visibility. Here, "compute" refers to the hardware essential for AI development, including GPUs and CPUs.

The researchers recommend restricting access to these resources as the most effective way to avert potential AI-related harm. This would imply governmental oversight of the creation, sale, and operation of any hardware crucial to advanced AI development. In certain respects, "compute governance" is already practiced globally: the U.S., for example, restricts the sale of specific GPU models commonly used in AI training to countries such as China.

According to the research, effectively restricting malevolent use of AI would require manufacturers to build "kill switches" into the hardware, enabling governments to remotely terminate illegal AI training efforts. The scholars caution, however, that such governance measures carry inherent risks related to privacy, economic impact, and concentration of power. Hardware monitoring in the U.S., for instance, could conflict with the White House's recent AI Bill of Rights guidance, which assures data protection rights to citizens.
Adding to these concerns, the researchers note that emerging advances in "communication-efficient" training could drive a shift toward decentralized compute for building, modeling, and operating AI. This trend could complicate the government's task of identifying and shutting down illegal training operations. In the researchers' judgment, this possibility might leave governments no option but to adopt an aggressive stance against the unlawful use of AI: "Society will have to use more powerful, governable compute timely and wisely to develop defenses against emerging risks posed by ungovernable compute."

Published At

2/16/2024 10:12:59 PM

Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.
