British Technology Companies and Child Safety Agencies to Test AI's Ability to Create Abuse Images

Technology companies and child safety agencies will receive authority to assess whether AI systems can generate child abuse material under new UK laws.

Substantial Increase in AI-Generated Harmful Material

The announcement came as a child protection watchdog released findings showing that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

Updated Regulatory Structure

Under the changes, the government will permit approved AI companies and child safety groups to examine AI models – the underlying systems behind chatbots and visual AI tools – to ensure they have adequate safeguards to stop them from creating depictions of child sexual abuse.

The minister for AI and online safety said the measures were "fundamentally about stopping exploitation before it happens", adding: "Specialists, under rigorous conditions, can now identify the danger in AI models early."

Tackling Legal Obstacles

The changes address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and other parties could not generate such content even as part of an evaluation process. Until now, authorities had to wait until AI-generated CSAM was published online before dealing with it.

This law is aimed at averting that issue by enabling testers to stop the production of those images at source.

Legislative Structure

The amendments are being introduced by the government as revisions to the crime and policing bill, which is also implementing a prohibition on possessing, creating or sharing AI systems designed to generate exploitative content.

Real-World Consequences

Recently, the official toured the London base of a children's helpline and listened to a mock-up call to counsellors involving an account of AI-based exploitation. The interaction portrayed an adolescent seeking help after facing extortion using an explicit deepfake of himself, created using AI.

"When I hear about children experiencing extortion online, it is a source of intense frustration for me and justified anger amongst families," he stated.

Concerning Statistics

A leading online safety organization reported that cases of AI-generated exploitation material – such as webpages that may contain numerous images – had significantly increased so far this year.

Cases of category A content – the most serious form of exploitation – rose from 2,621 visual files to 3,086.

  • Female children were predominantly targeted, accounting for 94% of illegal AI images in 2025
  • Portrayals of infants to toddlers increased from five in 2024 to 92 in 2025

Industry Response

The law change could "represent a vital step to ensure AI products are secure before they are launched," commented the head of the online safety foundation.

"Artificial intelligence systems have made it so survivors can be targeted all over again with just a few clicks, providing offenders the ability to create potentially limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Content which further commodifies victims' suffering, and renders children, particularly female children, less safe on and off line."

Support Interaction Data

The children's helpline also released data on support sessions in which AI was mentioned. AI-related risks raised in the sessions include:

  • Using AI to evaluate weight, physique and looks
  • AI assistants discouraging children from consulting trusted guardians about abuse
  • Facing harassment online with AI-generated material
  • Online extortion using AI-faked pictures

Between April and September this year, the helpline conducted 367 support sessions in which AI, conversational AI and related terms were mentioned, significantly more than in the equivalent timeframe last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy applications.

Kayla Moore