Austrian Data Rights Group Accuses OpenAI of Breaching EU Privacy Regulations
Summary:
Austria-based data rights group Noyb has lodged a privacy complaint against artificial intelligence developer OpenAI. The complaint alleges that OpenAI's chatbot, ChatGPT, failed to correct inaccurate personal information, in breach of EU privacy regulations. The company also allegedly declined to disclose the source of its training data. Noyb, known as the European Center for Digital Rights, argues that AI technology must comply with existing legal requirements and is seeking an investigation into OpenAI's data processing methods. The complaint adds to a series of grievances against AI chatbots, with both Microsoft's Bing AI chatbot and Google's Gemini AI chatbot having previously provided misleading information.
A data rights group based in Austria has lodged a privacy complaint against artificial intelligence firm OpenAI. According to the allegations made by Noyb on April 29, the AI developer failed to rectify incorrect data produced by its AI chatbot, ChatGPT, a potential violation of the EU's privacy regulations. Reportedly, an unnamed public figure asked OpenAI's chatbot for information about himself and repeatedly received inaccurate answers. OpenAI allegedly declined to rectify or erase the false data, stating that doing so was not feasible, and also refused to disclose the source or nature of its training data.
Commenting on the matter, Noyb's data protection lawyer, Maartje de Graaf, said that generating data about individuals with a system that cannot ensure accuracy or transparency runs counter to the law. She further stressed that it is the technology's responsibility to comply with legal requirements.
It is also worth noting that Noyb, formally known as the European Center for Digital Rights, was founded to enforce the EU's General Data Protection Regulation (GDPR) through strategic court cases and media initiatives.
Further, the report notes that Noyb has asked the Austrian data protection authority to investigate OpenAI's data processing methods and the measures it takes to ensure the accuracy of personal data processed by its large language models. De Graaf made clear that, under current circumstances, companies like OpenAI are failing to bring their AI chatbots into compliance with EU law on the processing of individuals' data.
Chatbots coming under scrutiny from researchers or activists is nothing new in Europe. In December 2023, two European nonprofits disclosed that Microsoft's Bing AI chatbot, now known as Copilot, gave misleading information about local elections in Germany and Switzerland. The chatbot provided inaccurate answers about candidates, polls, scandals, and voting, and even misquoted its sources. Google's Gemini AI chatbot also produced "woke" and historically inaccurate imagery via its image generator in an incident not directly related to the EU. Google apologized for the incident and pledged to revise its model.
Published At
4/29/2024 10:36:05 AM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any actions related to the asset, company, or any information in this article and assume full responsibility for their decisions. This article should not be considered as investment advice. Our news is prepared with AI support.