Meta Faces Multistate Lawsuit Over Manipulation of Minors; Alarming Rise in AI-Generated Abuse Material
Summary:
Meta, the parent company of Facebook and Instagram, faces a lawsuit from 34 U.S. states over the alleged manipulation of underage users on its platforms. The allegations center on Meta's algorithms fostering addictive behavior and harming minors' mental wellbeing. The lawsuit emerges amid rapid advancements in artificial intelligence (AI), which Meta has been using to address safety and trust issues. Meanwhile, in the UK, the Internet Watch Foundation (IWF) has raised concerns about the rise in AI-generated child sexual abuse material and called for global measures to curb this disturbing trend.
Meta, the owner of social media giants Facebook and Instagram, is facing a lawsuit from 34 U.S. states. The suit alleges that the company unethically manipulates underage users on its platforms. The legal action comes amid rapid strides in artificial intelligence (AI), particularly generative AI for text and images.
States including California, New York, Ohio, South Dakota, Virginia, and Louisiana accuse Meta of exploiting its algorithms to foster addictive behavior and harm the mental health of young users through features such as the "Like" button.
Meta's Chief AI Scientist has recently spoken about the potential risks of AI technology, and the company has used AI to improve safety and trust on its platforms. Even so, the legal challenge from the states continues unabated.
The filing, available on CourtListener, reveals that the states' legal representatives are seeking financial penalties ranging from $5,000 to $25,000 per alleged violation. At the time of writing, Cointelegraph had not received a response from Meta to its request for comment.
Meanwhile, the UK-based Internet Watch Foundation (IWF) has issued a warning about the rapid growth of AI-generated child sexual abuse material (CSAM). The IWF recently found 20,254 AI-generated CSAM images on a single dark web platform within a 30-day period, and warns that this influx of content threatens to flood the internet.
To tackle the rise in CSAM, the UK organization is calling for a unified international approach: revising existing legislation, improving the training of law enforcement agencies, and establishing regulatory oversight of AI models.
To limit AI developers' contribution to the creation of child abuse content, the IWF proposes banning AI-generated harmful content, removing the models associated with it, and prioritizing the removal of such material from AI models.
As generative AI image models have progressed, their ability to reproduce convincing human likenesses has improved markedly, as demonstrated by platforms such as Midjourney, Runway, Stable Diffusion, and OpenAI's DALL-E.
Published At: 10/26/2023 10:11:58 AM
Disclaimer: Algoine does not endorse any content or product on this page. Readers should conduct their own research before taking any action related to the asset, company, or information in this article, and assume full responsibility for their decisions. This article should not be considered investment advice. Our news is prepared with AI support.