European Commission Seeks AI Risk Management Details from Major Online Platforms Amid Election Security Concerns
Summary:
The European Commission has formally requested details from major online platforms including Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X regarding their management of generative artificial intelligence (AI) risks that could mislead voters. The action falls under the Digital Services Act (DSA), which requires designated platforms to mitigate systemic risks. Election security is the main focus, with guidelines to be finalized by March 27. A key concern is the falling cost of producing synthetic content, which raises the risk of misleading political deepfakes circulating widely during elections. Responses to the election-related requests are due by April 3; inaccurate, incomplete, or missing responses could result in fines or periodic penalties.
The European Commission has formally solicited details from eight platforms — Bing, Facebook, Google Search, Instagram, Snapchat, TikTok, YouTube, and X — about their strategies for managing risks tied to the misuse of generative artificial intelligence (AI) that could deceive voters. On March 14, the Commission said it is seeking more information about the safeguards these platforms have put in place against dangers linked to generative AI, such as 'hallucinations,' the viral spread of deceptive deepfakes, and the automated manipulation of services in ways that could alter voter perspectives.
These inquiries were issued under the Digital Services Act (DSA), which updates the EU's regulations on e-commerce and digital governance. They apply to the eight services because each is designated as a very large online platform (VLOP) or very large online search engine (VLOSE), and is therefore required to assess and mitigate systemic risks under the Act's provisions.
"The Commission seeks knowledge and internal documents about risk evaluation and mitigation measures related to the effect of generative AI on electoral processes, spread of illegal content, safeguarding fundamental rights, gender-based violence, child protection, and mental health," the Commission noted, underlining that the inquiries concern both the production and the dissemination of generative AI material.
The EU, tasked with ensuring VLOPs adhere to the specific regulations of the DSA concerning Big Tech, has highlighted election safety as a primary enforcement priority. Recently, it has been seeking feedback on election security protocols for VLOPs, while concurrently creating official guidance in this domain.
The intention behind these inquiries, as stated by the Commission, is to inform the development of that guidance. While the platforms have until April 3 to provide data pertaining to election security (a request deemed 'urgent'), the EU aims to finalize the election protection guidelines by March 27.
The Commission underlined that the cost of producing synthetic content is falling significantly, escalating the risk of misleading political deepfakes being shared extensively during elections. This has sharpened its focus on the large platforms capable of distributing such deepfakes at scale.
Per Article 74(2) of the DSA, the Commission may impose fines for providing inaccurate, incomplete, or misleading information in response to these inquiries. A failure by VLOPs and VLOSEs to respond at all could invite periodic penalty payments.
Notably, the Commission's requests come amid an industry accord to combat deceptive AI usage during elections, which emerged from the Munich Security Conference in February and was backed by several of the platforms that have now received requests for information (RFIs) from the Commission.
Published At
3/15/2024 12:58:19 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.