British Technology Companies and Child Protection Agencies to Test AI's Capability to Create Abuse Content

Technology companies and child protection organizations will be granted permission to evaluate whether artificial intelligence tools can produce child exploitation images under recently introduced UK legislation.

Substantial Increase in AI-Generated Harmful Content

The declaration coincided with revelations from a protection watchdog showing that reports of AI-generated child sexual abuse material have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.

Updated Legal Framework

Under the changes, the government will allow designated AI companies and child protection organizations to examine AI models – the foundational technology for chatbots and visual AI tools – and ensure they have sufficient safeguards to prevent them from producing images of child sexual abuse.

"Ultimately, this is about stopping abuse before it occurs," declared the minister for AI and online safety, adding: "Specialists, under strict conditions, can now identify the risk in AI models early."

Addressing Legal Challenges

The changes have been implemented because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot create such content as part of a testing process. Previously, officials had to wait until AI-generated CSAM was published online before dealing with it.

This law is designed to avert that issue by enabling experts to halt the production of such material at source.

Legislative Framework

The amendments are being introduced by the authorities as modifications to the crime and policing bill, which is also establishing a ban on possessing, creating or sharing AI systems designed to create exploitative content.

Real-World Impact

This week, the minister toured the London base of a children's helpline and listened to a mock-up call to advisers involving a report of AI-based exploitation. The call portrayed a teenager seeking help after being extorted with an explicit deepfake of himself, created using AI.

"When I hear about children experiencing extortion online, it fills me with intense anger, and parents are justifiably angry too," he stated.

Concerning Data

A leading internet monitoring foundation reported that instances of AI-generated exploitation material – such as webpages that may include numerous images – had significantly increased so far this year.

Cases of the most severe material – the gravest form of exploitation – rose from 2,621 images or videos to 3,086.

  • Girls were predominantly targeted, making up 94% of prohibited AI depictions in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Sector Response

The law change could "represent a vital step to ensure AI tools are secure before they are released," commented the head of the internet monitoring foundation.

"AI tools have made it possible for survivors to be targeted repeatedly with just a few clicks, giving criminals the capability to create potentially limitless quantities of advanced, photorealistic exploitative content," she continued. "Material which further commodifies victims' suffering, and makes young people, particularly girls, less safe online and offline."

Support Session Data

The children's helpline also published details of support sessions in which AI was mentioned. The AI-related risks discussed in those conversations include:

  • Using AI to rate weight, body and appearance
  • AI assistants discouraging children from consulting trusted adults about abuse
  • Being bullied online with AI-generated content
  • Digital extortion using AI-manipulated images

Between April and September this year, Childline delivered 367 support sessions where AI, chatbots and associated topics were discussed, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 interactions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Jonathan Miles

A seasoned journalist with a passion for uncovering stories at the intersection of technology and society.