UK Tech Companies and Child Protection Agencies to Examine AI's Ability to Generate Abuse Images
Technology companies and child safety agencies will be granted permission to evaluate whether artificial intelligence systems can generate child abuse material under recently introduced UK laws.
Significant Increase in AI-Generated Harmful Material
The declaration coincided with revelations from a protection monitoring body showing that reports of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
Updated Regulatory Structure
Under the amendments, the government will permit approved AI developers and child protection organizations to examine AI systems – the underlying technology for chatbots and image generators – and ensure they have sufficient safeguards to stop them from producing depictions of child exploitation.
The measures are "fundamentally about stopping abuse before it occurs," stated Kanishka Narayan, adding: "Specialists, under strict protocols, can now detect the risk in AI systems promptly."
Tackling Legal Challenges
The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others could not generate such images as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was published online before acting against it.
This legislation aims to avert that problem by making it possible to halt the production of such images at their source.
Legal Structure
The amendments are being added by the authorities as modifications to the crime and policing bill, which is also implementing a prohibition on owning, producing or distributing AI models developed to generate exploitative content.
Real-World Impact
This week, the official toured the London base of a children's helpline and listened to a simulated call to counsellors involving an account of AI-based exploitation. The interaction portrayed an adolescent seeking help after facing extortion using an explicit AI-generated image of himself.
"When I hear about young people experiencing blackmail online, it is a source of extreme frustration for me and justified anger amongst families," he stated.
Concerning Statistics
A leading online safety organization reported that cases of AI-generated exploitation material – such as webpages that may include multiple images – had significantly increased so far this year.
Cases of category A material – the most serious form of exploitation – rose from 2,621 images or videos to 3,086.
- Girls were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
- Portrayals of infants and toddlers rose from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "represent a vital step to guarantee AI tools are safe before they are launched," stated the head of the online safety foundation.
"AI tools have made it so survivors can be victimised all over again with just a few simple actions, giving offenders the ability to create possibly limitless amounts of advanced, lifelike exploitative content," she continued. "Material which additionally exploits victims' suffering, and makes young people, particularly female children, less safe both online and offline."
Support Session Information
Childline also released details of support interactions where AI has been referenced. AI-related harms mentioned in the conversations include:
- Employing AI to assess weight, body and appearance
- AI assistants dissuading young people from consulting safe guardians about harm
- Facing harassment online with AI-generated content
- Online extortion using AI-manipulated pictures
Between April and September this year, Childline conducted 367 counselling interactions in which AI, chatbots and related topics were mentioned, significantly more than in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.