Microsoft's AI Chatbot Copilot Implicated in Spreading Misleading Political Information
Summary:
A report by European nonprofits AI Forensics and AlgorithmWatch finds that Microsoft's AI chatbot, Copilot, disseminates inaccurate election information and misattributes its sources. In the study, the bot answered 30% of basic questions about political elections incorrectly, a potential source of misinformation. The inaccuracies aren't limited to Copilot; preliminary tests revealed similar problems in GPT-4. While these errors haven't influenced any election results yet, they could erode public understanding and trust. The chatbot's safeguards also cause it to dodge questions inconsistently. Microsoft says it aims to resolve the issues ahead of the 2024 U.S. elections.
Research by two Europe-based nonprofit groups indicates that Microsoft's AI chatbot Copilot, formerly known as Bing Chat, produces erroneous answers about electoral information and incorrectly cites its sources. AI Forensics and AlgorithmWatch released the study on December 15, reporting that 30% of the chatbot's responses to basic questions about political elections in Germany and Switzerland were incorrect. The errors touched on subjects including candidates, opinion polls, controversies, and voting processes. Questions about the 2024 U.S. presidential elections also drew flawed responses.
The researchers chose Bing's AI chatbot for the study because it was among the first AI chatbots to embed sources within its responses. They added that the inaccuracies were not exclusive to Bing: preliminary tests on GPT-4 revealed similar inconsistencies.
The nonprofits emphasized that the erroneous information has not yet affected any election results, but warned it could fuel public confusion and misinformation. “The growing prevalence of generative AI may threaten a fundamental pillar of democracy: the provision of trusted and transparent public information,” the statement warned. The researchers also found that the chatbot's built-in safeguards were applied unevenly, causing it to evade questions 40% of the time.
Responding to a Wall Street Journal article on the report, Microsoft pledged to fix the issues before the 2024 U.S. elections. A Microsoft spokesperson also advised users to always verify the accuracy of information obtained from AI chatbots.
In October, U.S. senators introduced a bill that would penalize creators of unauthorized artificial replicas of real people, living or deceased. In November, Meta, the parent company of Facebook and Instagram, issued a policy barring political advertisers from using its generative AI tools to create ads, a safeguard ahead of upcoming elections.
Published At
12/15/2023 5:57:01 PM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.