Online grooming

Understanding online grooming

What is online grooming?

Online grooming refers to the tactics abusers use on the internet to sexually exploit children. The process can be swift or gradual, but it fundamentally involves building trust in order to manipulate and exploit a child, while using fear and shame to keep the child silent. Recognizing and addressing this threat is crucial to safeguarding children.

The role of technology in grooming

While grooming has always existed, the rise of digital platforms has expanded abusers’ reach and opportunities. Predators follow children to their digital spaces, making online grooming a threat across various platforms. The internet has normalized communication with strangers, adding complexity to the threat.

Online grooming has evolved particularly insidiously within social gaming environments. Research from risk intelligence organisation Crisp (now Resolver) reveals that individuals seeking to abuse children in these environments are able to lock them into high-risk grooming conversations in as little as 19 seconds after the first message, with an average time of just 45 minutes.

Where does grooming happen?

Online grooming can occur almost anywhere children interact online. Many perpetrators identify targets on social media, in chat rooms, gaming environments, and other platforms that allow user-to-user communication. Predators may create fictional personas to build kinship or portray themselves as trustworthy adults, exploiting innocent interactions and pushing boundaries over time.

Perpetrators often move conversations to private messaging apps or end-to-end encrypted environments, a technique known as ‘off-platforming,’ to reduce the risk of detection.

Grooming and coercing children to produce ‘self-generated’ sexual material

Research suggests that prevalence rates for online grooming range between 9% and 19%. Most studies find higher rates of online grooming among girls, though the gender difference is less marked among children under 13.

Perpetrators are less likely to continue grooming if they believe a child is being cared for and supervised by a parent or guardian, which highlights the importance of examining the risks and vulnerabilities in children’s lives. A multi-sectoral response involving tech companies, law enforcement, and governments is necessary to detect and prevent online grooming. While parental care is a protective factor, the responsibility for preventing child sexual abuse cannot rest solely with parents.

Our joint study with Economist Impact, which surveyed 2,000 18-year-olds across four European countries, found that 54% of respondents who received sexually explicit material received at least some of it through a private video-sharing service, and 46% through a private messaging service.

Reporting data from the National Society for the Prevention of Cruelty to Children (NSPCC) shows that online grooming crimes have risen by 80% in the past four years.

Only 37% of tech companies surveyed used tools to detect the online grooming of children, according to a 2021 survey we conducted with the Tech Coalition.

The response: addressing the threat

Technical challenges and solutions

Detecting online grooming presents technical challenges, but solutions exist. Effective AI tools for detecting grooming need access to chat content to train their algorithms, and they must be able to detect grooming across different languages and to recognize slang and codewords.
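To give a rough sense of what this kind of tooling involves, the sketch below is a minimal, purely hypothetical illustration of a text classifier that flags risky chat messages. It does not describe any real detection system: it assumes scikit-learn, a tiny made-up set of labelled messages, and character n-gram features (which cope somewhat better with slang, misspellings, and mixed languages than whole-word features). Production tools rely on far richer conversation-level and behavioural signals, multilingual models, and human review.

```python
# Hypothetical sketch only: a toy classifier for flagging high-risk messages.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Made-up labelled excerpts: 1 = high risk, 0 = benign. A real system would
# train on a large, carefully governed dataset of chat content.
messages = [
    "let's keep this our secret, don't tell your parents",
    "want to trade pics? just between us",
    "good game, want to queue again later?",
    "gg, see you at practice tomorrow",
]
labels = [1, 1, 0, 0]

# Character n-grams generalise across languages, misspellings, slang and
# codewords better than whole-word features alone.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
model.fit(messages, labels)

# Score a new message. Real systems act on conversation-level patterns,
# not single messages, and route flags to trained moderators.
print(model.predict_proba(["this stays between you and me, ok?"])[:, 1])
```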

Preventive measures and Safety by Design

Solutions that stop online grooming before it happens are the most effective. Safety by Design solutions, such as age estimation and age verification tools, are at the forefront of this preventive approach. However, deeper knowledge of the threat is needed to implement better prevention and detection measures.

Online grooming is a complex and pervasive issue, and combating it effectively requires global cooperation, technological advancement, and legal reform. Awareness, prevention, and stringent measures are all crucial to addressing and mitigating this threat.

Page last updated on 24th November 2024