US States Sue Facebook for Child Manipulation Amid AI Breakthroughs
- The action coincides with the ongoing breakthroughs in generative AI.
- The states are seeking civil penalties of up to $25,000 per willful violation in the ongoing legal battle.
- 34 U.S. states took legal action against Meta for allegedly manipulating minors via Facebook and Instagram.
A coalition of 34 U.S. states has initiated legal action against Meta, the parent company of Facebook and Instagram. They allege that the social media giant has inappropriately manipulated minors and young Americans who are active on the two platforms.
This legal action coincides with ongoing breakthroughs in artificial intelligence (AI), particularly in generative AI for text. Attorneys general from multiple states, including California, Ohio, New York, Kentucky, Virginia, and Louisiana, accuse Meta of using its algorithms to foster addictive behaviors that, they contend, harm children’s mental health. The statement read in part:
Meta has repeatedly misled the public about the substantial dangers of its Social Media Platforms. It has concealed the ways in which these Platforms exploit and manipulate its most vulnerable consumers: teenagers and children.
The states’ attorneys general are pursuing distinct claims for damages, restitution, and compensation, with civil penalties ranging from $5,000 to $25,000 “per willful violation.”
A screenshot of the court filing against Meta at the U.S. District Court of Northern California. (Source: Deadline)
Notably, these state governments are pressing forward with legal action despite a recent statement by Yann LeCun, Meta’s chief AI scientist, who claimed that concerns about the existential risks of AI remain “premature.” LeCun asserted that Meta has already used AI to tackle trust and safety concerns on its platforms.
Meanwhile, the Internet Watch Foundation (IWF), based in the United Kingdom, has expressed serious concerns about the rapid increase in AI-generated child sexual abuse material (CSAM). In a recent report, the IWF disclosed that it had identified 20,254 AI-generated CSAM images on a single dark web forum within just one month, cautioning that this sharp uptick in disturbing content could inundate the internet.