Parents, Here’s The News Coming From The IWF

 

For the first time ever, the Internet Watch Foundation (IWF) has confirmed something deeply disturbing: AI chatbots are being used to generate child sexual abuse material (CSAM). This isn’t just a hypothetical risk anymore; children are already being put in harm’s way.

As The Guardian reported, IWF analysts found illegal content on a platform hosting multiple chatbot “characters,” where users could simulate sexual scenarios with child avatars. I’ve already talked about how video games are producing sexual content. But here we’re talking about the same capabilities coming from AI, which are far more dangerous because of the scale at which this content can reach young eyes.

Some of these depictions involved children as young as seven. It’s hard to overstate how horrifying this is. What seems like cutting-edge tech is being weaponized against the most vulnerable.

 

What They Found

 

Between June 1 and August 7, 2025, IWF acted on 17 reports from this one platform alone:

  • 94% of the images showed children aged 11–13 (Category C under UK law).
  • One image involved a child estimated to be 7–10 years old.
  • Metadata confirmed these were deliberately created using explicit prompts—someone intentionally told the AI to produce illegal content.

 

Taking Action

 

In July, IWF brought together experts from child protection, law, and AI to discuss what’s happening and how to respond. The conversation wasn’t just about CSAM. It also looked at mental health, self-harm, data privacy, and the ways AI can shape children’s emotional wellbeing.

They talked about:

  • How much harm AI chatbots can cause and who’s at risk.
  • Whether our current laws, including the Online Safety Act, are enough.
  • What it would take to make AI safe by design.

 

Chris Sherwood, CEO of the NSPCC, put it bluntly: as AI moves faster than laws and regulations, children are the ones who pay the price. And an IWF analyst said something that should make everyone pause:

“Unfortunately, seeing AI chatbots misused in this way is not surprising. When new technology is exploited by the wrong people, offenders will use every tool available to create, share, and distribute child sexual abuse material.”

 

Why This Matters

 

AI-generated CSAM is not victimless. Our children, their self-worth, the trust they place in what they see on the internet, and the trajectory of their lives – everything is at stake. Some AI models are trained on real abuse material, which means the trauma of real children is embedded into the content. Research shows that consuming CSAM, whether AI-generated or not, can normalize abuse, escalate offending, and drive demand for more exploitation.

The warning is clear: without urgent safeguards, AI could become a tool for abusers instead of a force for good. Child protection and safety-by-design cannot be afterthoughts; they have to be baked into every AI system from the start. And ethical computing practices from companies are a minimum prerequisite.

As caregivers invested in an equitable future, we must all do better. 

 

– 0 –

 

The Digital Literacy Project: Disrupting humanity’s technology addiction habits one truth at a time.

Truth About Technology – A Digital Literacy Project

Questions? Just ask!

Text or Call: 678.310.5025 | Contact: Fill Form

Bringing a Group? Email us for a special price!
