AI-driven abuse videos on the rise, says IWF

UK (Parliament Politics Magazine) – The IWF identified 1,286 AI-generated child abuse videos in the first half of 2025, up from just two in the same period last year, warning that rapid misuse of the technology risks flooding the open internet with such material.

As reported by The Guardian, AI-fuelled child abuse content is increasing online, as offenders exploit rapid advances in AI to create disturbingly realistic videos.

What did the Internet Watch Foundation reveal about AI child abuse videos?

The Internet Watch Foundation said AI-generated abuse content has reached near-realism and is spreading rapidly online.

The watchdog confirmed 1,286 illegal AI-generated child abuse videos in the first half of 2025, compared to just two during the same period last year.

According to the IWF, more than 1,000 of the AI-generated clips contained Category A abuse, the most extreme and graphic form of illegal content.

The watchdog warned that paedophiles are exploiting widely accessible AI video tools, whose development has been fuelled by a multibillion-dollar wave of tech investment.

The organisation reported a 400% increase in URLs containing AI-generated child abuse material during early 2025. It received 210 reports, compared to just 42 in 2024, with many sites hosting hundreds of images and a rising number of videos.

The IWF spotted a dark web post in which an offender described how quickly AI tools evolve, saying that by the time they had learned to use one tool, a better one had already appeared.

According to the watchdog, this year’s most realistic AI-generated abuse content was created using images of actual victims.

What did the IWF analysts say about AI abuse tools?

One IWF analyst said,

“It is a very competitive industry. Lots of money is going into it, so unfortunately, there is a lot of choice for perpetrators.”

The watchdog’s experts said offenders are taking basic AI models and fine-tuning them with child sexual abuse material, allowing the creation of highly realistic abuse videos.

According to the IWF, some AI tools were trained using only a handful of CSAM clips, yet still produced highly realistic synthetic abuse content.

What did Derek Ray-Hill say about the AI and CSAM surge?

Derek Ray-Hill, interim head of the IWF, warned that the rapid development and broad access to AI tools could spark a surge in AI-generated child abuse content online.

He said,

“There is an incredible risk of AI-generated CSAM leading to an absolute explosion that overwhelms the clear web.”

Mr Ray-Hill stated that the rise in such content could drive criminal activities, including child sexual abuse, trafficking, and modern slavery.

He added that offenders are increasing the circulation of CSAM by reusing images of known victims in AI-generated content, avoiding the need to exploit new children.

What new laws has the UK introduced to curb AI-abusive content?

  • Illegal to possess, create, or distribute AI tools made to generate child sexual abuse material (CSAM).
  • Offenders face up to 5 years in prison for using or distributing such AI tools.
  • Illegal to possess manuals or guides that explain how to use AI to create abusive images or exploit children.
  • Possessing such manuals carries up to 3 years in jail.
  • Running or moderating websites that share abusive images or give advice to offenders is also banned.
  • Border Force gains new powers to force suspects to unlock devices during checks.
  • Law applies to tools that “nudify” children or apply children’s faces onto abusive images.
  • AI-generated fake nudes or deepfakes of minors are treated as abuse material.
  • AI used to blackmail children or groom them is now part of the legal crackdown.
  • The new laws are part of the upcoming Crime and Policing Bill.

What did Yvette Cooper say about tackling AI-driven abuse?

Home Secretary Yvette Cooper said the law was vital to stop AI-fuelled abuse before it turned into real-world harm.

She added,

“We know that predators operating online often escalate to committing horrific crimes in person. It is vital that we tackle child sexual abuse both online and offline to better protect the public from emerging threats.”