British Technology Companies and Child Protection Officials to Examine AI's Capability to Create Abuse Content

Tech firms and child protection organizations will be granted authority to assess whether artificial intelligence tools can produce child exploitation material under recently introduced UK laws.

Substantial Rise in AI-Generated Illegal Content

The announcement coincided with figures from a safety monitoring body showing that cases of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the government will allow approved AI developers and child protection groups to examine AI models – the underlying systems that power chatbots and image generators – to ensure they have sufficient safeguards to stop them from producing images of child exploitation.

"Fundamentally about preventing exploitation before it occurs," declared Kanishka Narayan, adding: "Experts, under rigorous conditions, can now detect the danger in AI systems early."

Tackling Regulatory Obstacles

The changes have been introduced because it is illegal to produce and possess CSAM, meaning that AI developers and others cannot generate such images as part of an evaluation process. Until now, authorities could act only after AI-generated CSAM had been published online.

The legislation aims to close that gap by helping to halt the production of such images at the source.

Legislative Framework

The amendments are being introduced by the government as revisions to the crime and policing bill, which is also establishing a prohibition on possessing, creating or distributing AI models developed to generate exploitative content.

Practical Impact

Recently, the official visited Childline's London headquarters and listened to a mock-up call to counsellors featuring an account of AI-based exploitation. The call portrayed an adolescent seeking help after being blackmailed with an explicit AI-generated image of himself.

"When I learn about children experiencing extortion online, it is a cause of extreme anger in me and rightful anger amongst families," he said.

Concerning Data

A prominent online safety foundation reported that instances of AI-generated abuse material – each of which can refer to a web page containing numerous images – had risen significantly so far this year.

Instances of category A material – the gravest form of exploitation – increased from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, making up 94% of prohibited AI images in 2025
  • Depictions of newborns to toddlers increased from five in 2024 to 92 in 2025

Sector Reaction

The legislative amendment could "represent a crucial step to ensure AI products are safe before they are released," commented the chief executive of the internet monitoring organization.

"Artificial intelligence systems have made it so survivors can be victimised repeatedly with just a few clicks, providing criminals the ability to create potentially endless quantities of advanced, lifelike exploitative content," she added. "Content which additionally exploits victims' trauma, and makes young people, especially female children, less safe both online and offline."

Support Interaction Data

The children's helpline also released details of support sessions where AI has been referenced. AI-related risks mentioned in the conversations include:

  • Using AI to rate body size and appearance
  • Chatbots discouraging children from talking to safe adults about harm
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-manipulated pictures

Between April and September this year, Childline conducted 367 support interactions where AI, conversational AI and related topics were mentioned, significantly more than in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 interactions were connected with mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Jared Williams