UK Technology Companies and Child Safety Agencies to Test AI's Ability to Generate Abuse Images
Technology companies and child safety agencies will be granted authority to evaluate whether artificial intelligence systems can generate child exploitation images under recently introduced British legislation.
Significant Increase in AI-Generated Harmful Content
The announcement came alongside findings from a protection monitoring body showing that reports of AI-generated child sexual abuse material (CSAM) have more than doubled in the last twelve months, rising from 199 in 2024 to 426 in 2025.
New Legal Structure
Under the amendments, the government will allow designated AI companies and child safety organizations to inspect AI models – the underlying technology behind chatbots and image-generation tools – and ensure the models have sufficient safeguards to prevent them from creating depictions of child sexual abuse.
"Ultimately about stopping abuse before it occurs," stated the minister for AI and online safety, adding: "Specialists, under strict protocols, can now identify the danger in AI models early."
Addressing Legal Obstacles
The amendments address a legal obstacle: because it is against the law to create and possess CSAM, AI developers and others have been unable to generate such images as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM was published online before they could act.
The legislation is designed to avert that problem by helping to stop the creation of such material at its source.
Legislative Framework
The government is introducing the amendments to the crime and policing bill, which also establishes a prohibition on possessing, creating or distributing AI models designed to generate child sexual abuse material.
Practical Consequences
The minister recently visited Childline's London headquarters and listened to a mock call to counsellors featuring an account of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with a sexually explicit deepfake of themselves, created using AI.
"When I hear about young people facing blackmail online, it is a cause of extreme anger in me and rightful anger amongst families," he said.
Alarming Data
A prominent online safety foundation reported that instances of AI-generated exploitation material – counted as webpages, each of which may contain multiple images – had more than doubled so far this year.
Instances of the most severe material – the gravest category of exploitation – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly targeted, featuring in 94% of illegal AI images in 2025
- Depictions of infants and toddlers rose from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "represent a crucial step to ensure AI products are safe before they are launched," commented the head of the internet monitoring organization.
"AI tools have made it so victims can be targeted repeatedly with just a simple actions, providing offenders the ability to make possibly limitless amounts of advanced, photorealistic exploitative content," she added. "Content which additionally exploits victims' suffering, and makes children, particularly female children, less safe on and off line."
Counselling Session Data
Childline also released details of counselling sessions in which AI was mentioned. AI-related harms discussed in the sessions include:
- Using AI to rate body size, shape and appearance
- AI chatbots discouraging young people from speaking to trusted adults about harm
- Being bullied online with AI-generated content
- Online blackmail using AI-faked images
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.