Artificial Intelligence and ‘Gen AI’


The role of Artificial Intelligence (AI) in both the spread of and the fight against child sexual abuse content online

The issue

What is Artificial Intelligence?

Artificial Intelligence (AI) is a broad term to describe powerful digital technologies that have some of the qualities of the human mind, such as the ability to understand language, recognize pictures, solve problems and learn, as defined in the Cambridge English Dictionary.

AI computer systems have existed in various forms for many decades but are now more widely available to consumers in the form of chatbots, search engines and image generation tools, to name a few. 

‘Generative AI’, also called ‘Gen AI’, refers to AI technologies with the ability to create new content including text, images, audio and video. Today, many Gen-AI tools are available to anyone with access to the internet; many are free to use and require no technical expertise. 

While generative AI is a groundbreaking technology with transformative potential, proactive measures are critical to prevent and combat its misuse in creating child sexual abuse material (CSAM). Safeguarding children requires a combination of stringent regulations, advanced technology, and global collaboration. 

The rise of AI-generated CSAM

WeProtect Global Alliance's recent Global Threat Assessment report revealed a rise in the use of generative AI to create child sexual abuse material since early 2023. 

Key concerns around Gen AI include: 

  1. Creation of synthetic CSAM 
    • Generative AI, particularly models for image or video generation, can be exploited to create realistic but synthetic CSAM. This content may not involve real children but still poses ethical and legal challenges. 
    • Such material can perpetuate harm by normalizing abuse or serving as a gateway for offenders. 
  2. Deepfake technology 
    • Generative AI can manipulate existing images or videos, creating fake CSAM by altering the faces, bodies or voices of minors. This raises concerns about the victimization of individuals whose likenesses are used without consent. 
  3. Detection challenges
    • The synthetic nature of AI-generated CSAM complicates detection: traditional tools that rely on hashing (such as PhotoDNA) to identify known CSAM may not recognize new, AI-generated material, as the sketch after this list illustrates. 
    • Law enforcement and platforms must adapt their detection systems to identify and block synthetic content. 
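
To make the detection gap concrete, here is a minimal sketch of hash-based matching in Python, using the open-source imagehash library as a stand-in for proprietary systems such as PhotoDNA. The hash value and file path are hypothetical placeholders; real deployments match against databases of millions of verified hashes.

```python
# A minimal sketch of hash-based detection of known material, the
# technique behind tools like PhotoDNA. PhotoDNA itself is proprietary;
# the open-source `imagehash` library (pip install imagehash pillow)
# stands in here. The hash value and file path are hypothetical.

from PIL import Image
import imagehash

# Perceptual hashes of previously verified material (placeholder value).
KNOWN_HASHES = {imagehash.hex_to_hash("d1d1d1d1d1d1d1d1")}

MAX_DISTANCE = 5  # Hamming-distance tolerance for near-duplicates


def matches_known_content(path: str) -> bool:
    """Return True if the image is a near-duplicate of a known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)


# A freshly generated synthetic image has no counterpart in the database,
# so this check returns False -- the gap described in point 3 above.
if __name__ == "__main__":
    print(matches_known_content("suspect_image.jpg"))
```

Because a newly generated image was never seen or hashed before, it sails past this kind of lookup, which is why novel AI-generated material demands different detection approaches.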

What are the risks?

AI-generated child sexual abuse material (CSAM) poses significant risks by enabling the creation of highly realistic, synthetic content that can normalize exploitation, evade detection, and fuel the CSAM market.  

Deepfake technology can also target individuals, causing severe psychological harm and enabling blackmail or harassment.  

The global legal landscape struggles to address this emerging threat, particularly as traditional detection methods fail against synthetic content.  

To combat these dangers, it is critical to implement stringent regulations, advance detection technologies, establish safeguards in AI tools, and foster international collaboration to protect children and society from this evolving risk. 

Who are the offenders?

Offenders use generative AI to groom, manipulate, and exploit children, as well as to create CSAM for profit or to fuel their predatory behaviour. These offenders include paedophiles, cybercriminals, and organised criminal networks. Generative AI allows them to create synthetic content without involving real children, offering anonymity and helping them evade traditional detection methods.  

Criminal networks also use AI to facilitate the mass production and distribution of CSAM, often in regions with weaker laws. The speed, scale, and ability to bypass detection make generative AI a growing challenge, highlighting the need for advanced detection tools, stronger laws, and global cooperation to protect children. 

There can be overlaps between AI-generated child sexual abuse and online grooming and related crimes such as sextortion. It is also possible for perpetrators to generate hundreds of abuse images from just one piece of self-generated sexual material, or even to manipulate an innocent picture using ‘nudification’ apps. 

The existing research and data also suggest that there are links between Gen-AI child sexual abuse and peer-to-peer abuse, where young people are using widely available image-generation technologies to create sexual abuse content of peers at home, in education settings and in online spaces. 

Victims and impact

Many children may not understand the risks of generative AI, making them vulnerable to predators who use AI to create fake interactions or manipulate their trust. Young people seeking online validation are especially at risk. Offenders often target children in online gaming communities and virtual spaces, where AI-generated avatars or bots appear friendly and harmless, making it hard for children and young people to recognise potential dangers.  

Children with limited digital literacy or those in low-resource areas are even more vulnerable, as they may not fully grasp online safety or have the support to report exploitation. 

AI-generated child sexual abuse material can cause victims the same distress as traditional forms of abuse material and sexual exploitation, and the related trauma can be severe and long-lasting. The mental health impact of this form of abuse may be particularly acute in cases of peer-to-peer abuse, or instances where images are shared with friends, family or peers of the victim and used to humiliate or shame.  

In some instances, Gen-AI abuse content is used to blackmail the victim into sending money or goods, a crime known as sextortion or financial sexual extortion. Tragically, a number of reported suicides have been linked to sextortion cases and Gen-AI sexual abuse and exploitation.  

Statistics

31 million reports of suspected CSAM received by NCMEC in 2023

Source: CyberTipline 2023 Report, NCMEC

>20,000 AI-generated child abuse images found on a dark web forum within a single month, according to the Internet Watch Foundation.

Source: 2024 Update: Understanding the Rapid Evolution of AI-Generated Child Abuse Imagery, IWF

The FBI believes more than 20 youth suicides in the US have been directly related to sextortion schemes over the past three years.

Source: FBI Press Release

Children as young as 12 are being targeted to create extreme sexual and violent content, the Australian Federal Police warned.

Source: AFP Press Release, September 6, 2024

The global response needed

To respond effectively to AI-generated CSAM, a multi-faceted approach is needed that spans technology, law enforcement, policy, and education.  

Key actions include: 

1. Advanced detection tools

Developing AI-driven systems to detect synthetic CSAM is crucial. These systems must be able to identify subtle patterns in AI-generated content that traditional detection tools cannot catch. 
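
As a rough illustration only, the sketch below shows the general shape of a binary "AI-generated vs. real" image classifier in PyTorch. The backbone choice is an assumption, and a deployable detector would require large curated training datasets, careful evaluation, and ongoing retraining as generators evolve.

```python
# Schematic sketch of a "synthetic vs. real" image classifier in
# PyTorch/torchvision. This shows only the general shape of the approach;
# production detectors are far more sophisticated. The model must first
# be fine-tuned on a curated dataset of real and AI-generated images
# (training loop omitted), so the untrained head below is a placeholder.

import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Generic pretrained backbone with the classification head replaced by a
# single logit: the model's estimate that an image is AI-generated.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])


def synthetic_probability(path: str) -> float:
    """Score an image; higher means more likely AI-generated."""
    batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return torch.sigmoid(model(batch)).item()
```

Unlike hash matching, a classifier of this kind scores content it has never seen before, which is what makes it suited to novel synthetic material.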

2. Stronger legal frameworks

Laws must be updated to specifically address the creation, distribution, and possession of AI-generated CSAM. This includes ensuring that both real and synthetic abuse material are treated equally under the law.

3. International collaboration

Offenders often operate across borders, so global cooperation among law enforcement, governments, and tech companies is essential to track offenders and remove harmful content quickly.

4. Tech responsibility

Platforms that host or share AI-generated content must take proactive measures to prevent abuse. This includes implementing safeguards to detect and block harmful content and ensuring that AI tools are not misused for exploitation.
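
One such safeguard, sketched here as a purely hypothetical example, is screening every prompt before a generation model runs. The stubs and threshold below are placeholders, not any real provider's API; real services layer multiple checks on inputs, outputs, and account behaviour.

```python
# Hypothetical sketch of a pre-generation safeguard for an image-generation
# service: every prompt is screened before any model runs, and refusals are
# logged (without the prompt text) for abuse-pattern analysis. The stubs
# and threshold below are placeholders, not a real provider's API.

import logging
from typing import Optional

logger = logging.getLogger("safety")
REFUSAL_THRESHOLD = 0.5  # hypothetical operating point


def prompt_safety_score(prompt: str) -> float:
    # Stand-in for a trained classifier that scores the risk that a prompt
    # requests exploitative or otherwise prohibited content.
    return 0.0


def run_model(prompt: str) -> bytes:
    # Stand-in for the actual image-generation backend.
    return b""


def generate_image(prompt: str) -> Optional[bytes]:
    """Run generation only if the prompt passes the safety screen."""
    if prompt_safety_score(prompt) >= REFUSAL_THRESHOLD:
        logger.warning("Refused a prompt flagged by the safety screen")
        return None
    return run_model(prompt)
```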

5. Digital literacy education

Educating children, parents, and caregivers about the risks of AI-generated exploitation is critical to raising awareness and improving online safety. This includes teaching children how to recognize and report suspicious behaviour.

6. Research and innovation

Continuous research into the evolving capabilities of generative AI and its use in CSAM creation is necessary to stay ahead of offenders and develop new prevention strategies.

Many Alliance members are already taking the actions outlined above. You can read more about what Alliance members are doing in response on this page.  

AI for good

Although AI technologies can be misused to cause harm, they can also support the fight against child sexual abuse online. For example, AI technologies are already being used to help moderate, identify and classify harmful content online. This can dramatically speed up the identification and removal of CSAM, as well as reduce human moderators' sustained exposure to extreme content, which can have lasting health and wellbeing impacts.  
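
As a simplified, hypothetical sketch of how such moderation pipelines reduce human exposure: automated layers handle the clear-cut cases so moderators review only ambiguous items. The stubs stand in for the hash-matching and classification approaches sketched earlier, and the thresholds are assumptions.

```python
# Simplified, hypothetical triage pipeline: automated layers remove
# clear-cut cases so human moderators are exposed only to genuinely
# ambiguous items. The stubs and thresholds are placeholders standing in
# for the hash-matching and classification approaches sketched earlier.

from enum import Enum


class Action(Enum):
    REMOVE = "remove_and_report"
    HUMAN_REVIEW = "queue_for_human_review"
    ALLOW = "allow"


BLOCK_THRESHOLD = 0.95   # classifier score above which content is removed
REVIEW_THRESHOLD = 0.60  # scores in between are routed to a human


def matches_known_hash(path: str) -> bool:
    return False  # stand-in for a perceptual-hash database lookup


def classifier_score(path: str) -> float:
    return 0.0  # stand-in for a trained harmful-content classifier


def triage(path: str) -> Action:
    """Route an uploaded image through layered automated checks."""
    if matches_known_hash(path):          # fast path: known material
        return Action.REMOVE
    score = classifier_score(path)        # slower path: novel material
    if score >= BLOCK_THRESHOLD:
        return Action.REMOVE
    if score >= REVIEW_THRESHOLD:
        return Action.HUMAN_REVIEW        # humans see only edge cases
    return Action.ALLOW
```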

AI tools such as chatbots can also be used to promote socially positive online behaviours, helping to prevent harm and provide guidance to children and young people. AI tools can also prevent abusive content from being posted and block accounts associated with online abuse.  

This is a global emergency. The misuse of generative AI spans borders, making international collaboration essential. We cannot stand idly by as generative AI is weaponized against the most vulnerable members of our society. This is not just a crisis of technology; it’s a moral emergency. Every second we delay, more children are victimised.

Baroness Joanna Shields OBE, founder of WeProtect