New film exposes AI’s role in online child sexual exploitation and calls for urgent global action
MEDIA RELEASE
Friday 17 January 2025
WeProtect Global Alliance today unveiled a compelling short film, Protect Us, exposing the harrowing ways generative AI applications and chatbots are being weaponised to exploit children online. Premiering today at the DLD Conference in Munich, the film calls on global leaders to take decisive action to safeguard children and young people.
“Generative AI has revolutionised the creation of hyper-realistic text, images, audio, and video, transforming the digital landscape but also introducing unimaginable dangers for children,” said Baroness Joanna Shields OBE, founder of WeProtect Global Alliance and executive producer of the film. “Predators are harnessing these technologies to create fake yet convincing sexually explicit images and child sexual abuse material (CSAM), manipulate children through grooming tactics, and infiltrate digital spaces with devastating consequences. This film lays bare the scale of the crisis and underscores the need for unified action.”
The urgency cannot be overstated. An estimated 300 million children worldwide were victims of online sexual exploitation and abuse in the past year alone. But numbers, no matter how staggering, cannot convey the human cost of this crisis. Behind each statistic is a moment frozen in time: a moment of fear, shame, and helplessness that will stay with that child forever.
Adding to the complexity of the threat is the alarming rise in peer-on-peer harm and the proliferation of self-generated sexual images among young people. A recent report by Thorn, a WeProtect Global Alliance member, reveals that 1 in 10 children are aware of peers using generative AI to create non-consensual intimate images of others, highlighting the growing prevalence of these behaviours and their devastating impact on victims. The sheer volume of harmful content being generated is overwhelming reporting systems and hampering law enforcement investigations.
“The trauma of these crimes is not only deeply personal but also indelible. Images and interactions are shared and algorithmically amplified across the digital landscape, perpetuating harm long after the initial violation,” Shields said.
Protect Us is not just a film—it is a plea to confront this moral emergency and act decisively to prevent further harm to children. Premiering at a pivotal moment in the fight against online child sexual exploitation, the film emphasises the critical need for governments, technology companies, and society to unite in addressing this growing crisis.
“Children are not miniature adults. They lack the capacity to discern real from fake or safe from harmful online. We cannot expect them to navigate these dangers alone. Instead, we must build online spaces that are age-appropriate and inherently protect children and teens until they are old enough to make informed decisions.”
The film can be viewed at https://youtu.be/OuH-D-au1Ho
The film was produced by SHFT, which specialises in digital content – from high-impact social media pieces to campaign-driven documentaries for streaming platforms.
For further information, interview requests, or access to the film, please contact: Michelle Jeuken, Head of Communications and Engagement, WeProtect Global Alliance, michelle@weprotectga.org
ADDITIONAL BACKGROUND
Key trends
- Explosion of child sexual abuse material (CSAM): WeProtect Global Alliance’s recent Global Threat Assessment report revealed a rise in the use of generative AI to create CSAM since early 2023.
Additionally, recent reports from the Stanford Internet Observatory provide concerning statistics about AI-generated CSAM and its impact on reporting systems:
- Proliferation of AI-Generated CSAM: A study in late 2023 revealed that some popular AI text-to-image generators were trained on datasets containing known CSAM images. This inclusion has enabled the production of photorealistic AI-generated explicit images.
- Challenges for detection systems: AI-generated CSAM does not match entries in traditional hash databases, such as those built with PhotoDNA, which were designed to identify previously known CSAM. This makes detection and reporting significantly more difficult.
- Impact on reporting systems: Reporting channels such as the National Center for Missing and Exploited Children’s (NCMEC) CyberTipline are overwhelmed by a growing volume of reports, many of which now involve novel AI-generated content. Law enforcement struggles with prioritisation, slowing response times and limiting resources for victim identification.
In 2023, NCMEC received over 31 million reports of suspected CSAM, a staggering number that continues to climb as AI tools make it easier to create and share such material. A report by the Internet Watch Foundation (IWF) found over 20,000 AI-generated child abuse images on a dark web forum within a single month.
These statistics highlight the urgent need for updated detection technologies, legislative frameworks, and cross-sector collaboration to mitigate the growing risks posed by AI-enabled exploitation.
- Law enforcement overwhelmed: INTERPOL has reported a 30% increase in case backlogs, as investigators grapple with distinguishing between real victims and AI-generated material. This delay costs precious time and resources in rescuing children from harm.
- Sophisticated grooming tactics: AI is enabling predators to create fake identities and deploy chatbots that mimic children’s behaviour with chilling accuracy. These tools accelerate the grooming process and make it harder to detect.
- Accessibility of dangerous tools: Generative AI platforms capable of producing synthetic CSAM are widely available for little to no cost, putting advanced criminal tools in the hands of anyone with an internet connection.