In brief
- The letter warned AI companies, including Meta, OpenAI, Anthropic, and Apple, to prioritize children's safety.
- Studies show 7 in 10 teens in the U.S. have already used generative AI tools, as have over half of 8–15-year-olds in the UK.
- Meta was particularly singled out after internal documents revealed its AI chatbots were permitted to engage in romantic roleplay with children.
The National Association of Attorneys General (NAAG) has written to 13 AI companies, including OpenAI, Anthropic, Apple, and Meta, demanding stronger safeguards to protect children from inappropriate and harmful content.
It warned that children were being exposed to sexually suggestive material through "flirty" AI chatbots.
"Exposing children to sexualized content is indefensible," the attorneys general wrote. "And conduct that would be unlawful – or even criminal – if done by humans is not excusable simply because it is done by a machine."
The letter also drew comparisons to the rise of social media, saying government agencies did not do enough to highlight the ways it negatively affected children.
"Social media platforms caused significant harm to children, in part because government watchdogs did not do their job fast enough. Lesson learned. The potential harms of AI, like the potential benefits, dwarf the impact of social media," the group wrote.
AI use among children is widespread. In the U.S., a survey by non-profit Common Sense Media found 7 in 10 teens had tried generative AI as of 2024. In July 2025, it found more than three-quarters were using AI companions, and half of respondents said they relied on them regularly.
Other countries have seen similar trends. In the UK, a survey last year by regulator Ofcom found that half of online 8–15-year-olds had used a generative AI tool in the previous year.
The growing use of these tools has sparked mounting concern among parents, schools, and children's rights groups, who point to risks ranging from sexually suggestive "flirty" chatbots and AI-generated child sexual abuse material to bullying, grooming, extortion, disinformation, privacy breaches, and poorly understood mental health effects.
Meta has come under particular fire recently after leaked internal documents revealed its AI assistants had been permitted to "flirt and engage in romantic roleplay with children," including those as young as eight. The documents also showed policies permitting chatbots to tell children their "youthful form is a work of art" and describe them as a "treasure." Meta later said it had removed those guidelines.
NAAG said the revelations left attorneys general "revolted by this apparent disregard for children's emotional well-being" and warned that the risks were not limited to Meta.
The group cited lawsuits against Google and Character.ai alleging that sexualized chatbots had contributed to a teenager's suicide and encouraged another teen to kill his parents.
Among the 44 signatories was Tennessee Attorney General Jonathan Skrmetti, who said companies cannot defend policies that normalize sexualized interactions with minors.
"It's one thing for an algorithm to go astray – that can be fixed – but it's another for people running a company to adopt guidelines that affirmatively authorize grooming," he said. "If we can't steer innovation away from hurting kids, that's not progress – it's a plague."
Decrypt has contacted the AI companies mentioned in the letter but has not yet heard back from all of them.