
Rising Threat of AI-Generated Deepfakes in Politics: The 2024 Election Challenge

Algoine News
Summary:
As the 2024 elections near, the U.S. grapples with a surge of AI-generated political deepfakes, which have increased in frequency by 1,740% in North America. This fraud trend has prompted legislative action, such as a ban on AI-generated voices in robocalls. However, experts warn that advancing technology may soon make detection by the human eye impossible without specialized tools. Proposed solutions include mandatory AI-content checks and user verification on social media platforms. Governments around the world are weighing measures to tackle the issue, with India seeking approval mechanisms and Europe setting AI misinformation guidelines.
As the 2024 election cycle nears, the United States, along with other nations, is preparing for a new kind of challenge posed by emerging technologies. The availability of public AI tools has sparked a surge in political deepfakes, requiring voters to sharpen their ability to separate fact from fiction.

On Feb. 27, Senate Intelligence Chair Mark Warner said the U.S. was less equipped to deal with election fraud in the upcoming election than it was in 2020, largely because of the proliferation of AI-generated deepfakes over the past year. Data from identity verification service SumSub shows that North America saw a 1,740% increase in deepfake incidents, while detections worldwide rose tenfold in 2023.

In one striking incident, residents of New Hampshire received robocalls on Jan. 20-21 mimicking President Joe Biden's voice and instructing them not to vote in the Jan. 23 primary. A week later, U.S. regulators outlawed the use of AI-generated voices in automated phone scams, making such calls a violation of U.S. telemarketing law. Legal measures alone, however, often fail to deter fraudsters.

As the country braces for Super Tuesday on March 5, when a large number of U.S. states hold primary elections and caucuses, concern about fake AI-generated information and deepfakes is mounting. Pavel Goldman Kalaydin, head of AI and machine learning at SumSub, explained how voters can train themselves to recognize deepfakes and handle potential cases of deepfake identity fraud. Kalaydin distinguished between deepfakes produced by sophisticated teams using high-end GPUs and generative AI models and those fabricated by less sophisticated fraudsters using everyday tools.
He urged voters to scrutinize content and verify its sources, separating trusted, credible outlets from dubious ones. He identified several telltale signs of a deepfake: inconsistent hand movements, artificial backgrounds, lighting changes, skin tone differences, unusual eye movements, poor lip synchronization, and digital artifacts. But he cautioned that rapidly evolving technology may soon make it impossible for the human eye to identify deepfakes without specialized detection tools.

Kalaydin noted that the growing problem lies both in the generation of deepfakes and in their dissemination. While accessible AI creates many opportunities, it has also fueled the rise in fraudulent content. "Anyone can access swapping applications and morph content to spin fake stories, thanks to the availability of AI technology," he said. He added that the absence of sound legal regulations and policies makes it easier for misinformation to spread online, misleading voters and raising the risk of uninformed decisions.

As a potential solution, he proposed mandatory checks for AI-generated or deepfake content on social media platforms to keep users informed. "To safeguard users from false information and deepfakes, platforms need to implement deepfake and visual detection technologies to confirm content's authenticity," he said. He also suggested introducing user verification on platforms.

Governments worldwide have begun weighing their own measures. In India, local tech companies have been advised to obtain approval before launching new 'untrustworthy' public AI tools ahead of the 2024 elections. The European Commission has formulated AI misinformation guidelines for platforms operating within its territories, and following its lead, Meta, the parent company of Facebook and Instagram, launched a strategy for the EU to tackle abuse of generative AI across its platforms.
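The warning signs Kalaydin lists can be read as a simple checklist. The sketch below is a purely illustrative scoring helper built from those signs; it is a hypothetical example, not SumSub's actual detection technology, and the sign names and scoring rule are our own assumptions.

```python
# Hypothetical checklist scorer based on the deepfake warning signs
# listed in the article. Illustrative only; real detection systems use
# trained models, not manual checklists.

DEEPFAKE_SIGNS = [
    "inconsistent_hand_movements",
    "artificial_background",
    "lighting_changes",
    "skin_tone_differences",
    "unusual_eye_movements",
    "lip_sync_inconsistencies",
    "digital_artifacts",
]

def deepfake_risk(observed: set) -> float:
    """Return the fraction of known warning signs observed (0.0 to 1.0)."""
    unknown = observed - set(DEEPFAKE_SIGNS)
    if unknown:
        raise ValueError(f"Unknown signs: {unknown}")
    return len(observed) / len(DEEPFAKE_SIGNS)

# Example: a clip with mismatched lip sync and odd lighting changes.
score = deepfake_risk({"lip_sync_inconsistencies", "lighting_changes"})
print(f"risk score: {score:.2f}")  # prints "risk score: 0.29"
```

A higher score means more of the listed cues are present; as the article notes, even a clean score is no guarantee, since evolving generation techniques can evade human-visible cues entirely.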

Published At

3/5/2024 4:45:25 PM

Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.
