Indian Government Demands Pre-approval for New AI Services Amid Election Security Concerns
Summary:
The Indian Government has instructed tech companies to secure governmental approval before launching new AI services, particularly those deemed "unreliable" or still in trial stages. The advisory, issued by the Indian IT ministry, emphasizes user safety and legal compliance amid concerns over potential impacts on the upcoming general elections. The move follows recent criticism of Google's AI tool Gemini for allegedly biased responses, prompting an emphasis on platforms' legal obligation to foster trust and security. Despite some resistance from the tech industry, the government stresses India's commitment to AI advancement and the growth of its digital ecosystem.
The Indian Government has advised technology companies developing new artificial intelligence (AI) services to obtain governmental approval before their release. The advisory, issued by the Indian IT ministry on 1 March, states that such permission is mandatory before introducing AI services that could be "unreliable" or are still in the trial stage, and that these services should warn users about potential inaccuracies in their responses. It further specifies that Indian user access to such services must be explicitly sanctioned by the Indian Government. The advisory also instructed platforms to ensure their tools will not compromise the integrity of electoral processes, a significant requirement in light of the upcoming general elections.
This initiative stems from recent incidents in which senior Indian politicians criticized Google and its AI product Gemini for potentially biased or erroneous responses, such as labelling Prime Minister Narendra Modi a fascist. Google acknowledged that Gemini was not always reliable, especially on current affairs, and issued an apology. In response, Deputy IT Minister Rajeev Chandrasekhar stressed that platforms have a legal obligation to foster safety and trust, and that being 'unreliable' does not excuse them from that responsibility.
New regulations combating the distribution of AI-generated deepfakes were announced in November ahead of the elections, echoing steps taken by US regulators. Despite resistance from the tech industry to the recent AI advisory, given India's status as a technology hub, Chandrasekhar stated that platforms supporting or generating illegal content should face 'legal consequences'. He reiterated India's enthusiasm for AI and its commitment to broadening its digital and innovation ecosystem, and said the advisory was meant to put those launching insufficiently tested AI platforms onto the public internet on notice to comply with Indian laws and maintain user safety.
On another note, Indian AI startup Sarvam partnered with Microsoft on February 8th to bring an Indic voice large language model (LLM) to the Azure AI platform, catering to the wider Indian populace.
Published At
3/4/2024 1:41:53 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.