FTC to Tighten Rules against AI Deepfake Impersonations for Consumer Protection
Summary:
The Federal Trade Commission (FTC) is planning to expand a current rule to combat deepfakes, aiming to make it illegal to use artificial intelligence to impersonate businesses or government entities. The proposed regulation would also prohibit platforms from providing goods or services that they know could be used to harm consumers through impersonation. FTC Chair Lina Khan emphasized the need for this update to protect against rising AI-related scams. The revised rule allows the FTC to take legal action against fraudsters impersonating government agencies or businesses, in an effort to protect consumers and recover illicitly gained funds.
In response to the growing threat of deepfakes, the Federal Trade Commission (FTC) is aiming to expand an existing rule so that using artificial intelligence (AI) to impersonate businesses or government entities is illegal, thereby strengthening consumer protection. The expansion, pending final wording and public comment submitted to the FTC, could also result in generative artificial intelligence (GenAI) platforms being banned from providing goods or services that they know or have reason to know could be used to defraud consumers through impersonation.
FTC Chair Lina Khan said in a media statement, "Given the rise of AI-driven scams like voice cloning, it's crucial now more than ever to protect Americans against fraudulent impersonators. The proposed enhancements to our impersonation rule are aimed at doing exactly that, fortifying the FTC's approach to managing AI-related scams involving identity theft."
The amended Government and Business Impersonation Rule enables the FTC to file federal court proceedings directly, requiring con artists to repay funds obtained while posing as government agencies or businesses. The statistics are chilling: in the previous year alone, Americans were conned out of $2.7 billion in scams involving perpetrators masquerading as the government.
The revised rule on business and government impersonation will take effect 30 days after its publication in the Federal Register. The public will have 60 days from that publication date to submit comments on the Supplemental Notice of Proposed Rulemaking (SNPRM).
Deepfakes involve the use of AI to manipulate videos by altering an individual's face or body. Although no federal legislation addresses the creation or distribution of deepfake imagery, lawmakers are moving to tackle the issue. Celebrities and private individuals victimized by deepfakes can potentially turn to established legal avenues such as copyright law, the right of publicity, and various tort claims to seek redress. However, these legal processes can be lengthy and burdensome.
On January 31, the FCC banned robocalls generated with artificial intelligence by reinterpreting an existing regulation that prohibits spam calls made with automated or pre-recorded voices. The ban followed a robocall campaign in New Hampshire that used a deepfake of President Biden's voice to discourage voting. Absent congressional action, individual states across the country have enacted their own laws declaring deepfakes unlawful.
Published At
2/16/2024 11:08:12 AM