Google Limits Election-Related Queries on Gemini Chatbot Amid Rising AI Misinformation Concerns
Summary:
Google has announced plans to limit election-focused queries that its Gemini chatbot can respond to, with changes already implemented in the U.S. and India. The move comes in response to concerns over possible misinformation and fake news generated through AI, particularly in image and video content. Countries like South Africa and India now require tech companies to obtain government approval before releasing potentially unreliable AI tools. The European Commission and Meta have also taken measures to combat misuse of generative AI on their platforms.
Google has announced plans to limit the range of election-related queries its Gemini chatbot will answer. The changes have already been rolled out in the United States and India ahead of those countries' upcoming elections. The Alphabet subsidiary made the announcement on Tuesday, March 12, aiming to avoid potential missteps as the technology is put to use. Last month, Google pulled its AI-powered image-generation tool after controversy over historically inaccurate depictions and problematic outputs. The tool had launched earlier in February through Gemini, Google's suite of AI models, coinciding with a rebrand.

Advances in generative AI have heightened fears over disinformation and fake news, particularly in image and video creation, prompting governments to consider regulating the technology. In a blog post, Google said the restrictions on election-related queries answered by Gemini were put in place out of an abundance of caution on an important topic. The company said it takes seriously its responsibility to provide high-quality information for such queries and is continually working to improve its safeguards.

Countries such as South Africa and India, both preparing for national elections, now require tech firms to obtain government approval before publicly releasing AI tools that may be unreliable or still in testing; such tools must be clearly labeled to indicate that they can produce incorrect results. The spread of publicly available AI tools has fueled a rise in political deepfakes, forcing voters to develop new skills to distinguish authentic content from fabricated material. On Feb. 27, U.S. Senate Intelligence Committee Chair Mark Warner said the U.S. is less prepared to counter election interference in the upcoming 2024 election than it was in 2020.
In Europe, the European Commission has issued guidelines on AI-generated disinformation for platforms operating in the region, and shortly afterward Meta, the parent company of Facebook and Instagram, outlined its own plan for tackling generative AI misuse on its platforms in the European Union.
Published At
3/13/2024 11:11:54 AM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.