AI's Role in Manipulating Voter Sentiment in the 2024 U.S. Elections and Efforts to Counter Disinformation
Summary:
This article examines the potential impact of artificial intelligence on voter sentiment in the 2024 U.S. presidential election and efforts to counter disinformation on social media. China-linked actors are leveraging AI-generated visual media in a campaign targeting politically divisive topics, while AI is also being used to detect and predict disinformation threats. U.S. regulators are weighing rules for deepfakes and AI in political ads, and Google plans to make AI disclosure mandatory for political campaign ads.
The use of artificial intelligence (AI) on social media platforms is seen as a potential risk to voter sentiment ahead of the 2024 U.S. presidential election, and tech giants and U.S. government entities are actively monitoring disinformation. A recent report from Microsoft's research unit, the Microsoft Threat Analysis Center (MTAC), revealed that "China-affiliated actors" are using AI-generated visual media in a widespread campaign targeting politically divisive topics and disparaging U.S. political figures and symbols. The report predicts that China will continue to refine this technology, raising concerns about its future deployment on a larger scale.

However, AI is also being used to detect and combat disinformation. Accrete AI, for example, has deployed AI software that provides real-time prediction of disinformation threats on social media under a contract with the U.S. Special Operations Command (USSOCOM). Accrete's CEO, Prashant Bhuyan, emphasized the threat posed by deepfakes and other AI applications on social media, pointing to an unregulated environment in which adversaries exploit vulnerabilities and manipulate behavior through the intentional spread of disinformation.

In the 2020 U.S. election, troll farms — organized groups of internet trolls that aim to interfere with political opinions and decision-making — reportedly reached 140 million Americans each month, according to a report from MIT. Accordingly, U.S. regulators have been exploring ways to regulate deepfakes before the upcoming election. On August 10, the U.S. Federal Election Commission unanimously voted to advance a petition to regulate political ads that use AI, acknowledging deepfakes as a significant threat to democracy.
Google has also taken steps to address this issue by announcing on September 7 that it will update its political content policy in mid-November 2023, making AI disclosure mandatory for political campaign ads. The disclosure requirement will apply to content featuring synthetic elements that inaccurately represent real or realistic-looking individuals or events.
Published At
9/8/2023 1:41:08 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.