India Plans Comprehensive Regulations to Combat Deepfake Threats
Summary:
India's Minister for Railways, Communications, Electronics, and Information Technology, Ashwini Vaishnaw, announced that the country is developing regulations to manage the use of deepfakes. In dialogues with various stakeholders, including academics, industry groups, and social media firms, Vaishnaw said the government plans to complete the drafting process in the coming weeks. The regulations will consider penalties both for those who upload such manipulated content and for the platforms that host it. The disclosure comes amid global efforts to regulate artificial intelligence, following growing concerns about the potential misuse of deepfake technology.
India's Minister for Railways, Communications, Electronics, and Information Technology, Ashwini Vaishnaw, said on November 23 that India is formulating legal frameworks to govern the use of deepfakes. The statement builds on concerns voiced a day earlier by Prime Minister Narendra Modi about the implications of the technology. According to a Reuters report, Vaishnaw made the comments during a dialogue with universities, business organizations, and social media companies, announcing that the Indian government plans to finalize the regulations in the coming weeks.
Deepfakes are created with artificial intelligence (AI) to produce convincingly realistic videos or audio recordings that alter or replace a person's appearance or voice in existing footage or audio. In remarks at a virtual G20 conference, Modi called on global leaders to jointly regulate AI and voiced his concerns about deepfakes' negative impact on society.
Vaishnaw added that the drafting process will also weigh sanctions for individuals who upload such content and for the platforms that host it. The move comes amid worldwide efforts to legislate oversight of AI.
In a related development, U.S. President Joe Biden signed an executive order in October requiring developers of AI systems that could pose risks to U.S. national security, the economy, public health, or safety to share the results of safety tests with the U.S. government before release.
Elsewhere, Hong Kong plans to deploy AI to counter superbugs and the over-prescription of antibiotics, the United Nations has organized a 39-member panel to address governance issues in AI, and European lawmakers have drafted rules that could be endorsed next month. In November, Canada's national intelligence agency, the Canadian Security Intelligence Service, warned about disinformation campaigns conducted online using AI deepfakes.
In August, Chinese law enforcement announced increased surveillance of the Web3 sector after Jinfeng Sun, a political delegate of the Network Security Bureau, revealed that 79 cases of deepfake AI fraud, such as the misuse of digital face-swapping, had led to the arrest of 515 people.
Published At
11/24/2023 12:06:18 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.