We should all be concerned about the use of predatory AI
Recent news coverage has reported that more than 200 artists, including Jon Bon Jovi, Billie Eilish, Stevie Wonder, Pearl Jam and Mumford & Sons, have signed an open letter protesting the potential harm artificial intelligence (AI) poses to artists.
The letter, put out by the organisation Artist Rights Alliance, warns that “when used irresponsibly, AI poses enormous threats to our ability to protect our privacy, our identities and our livelihoods”. It continues: “Unchecked, AI will set in motion a race to the bottom that will degrade the value of our work and prevent us from being fairly compensated for it. This assault on human creativity must be stopped.”
Yet the risks posed by the misuse of AI go far beyond this. Last month, Channel 4 News found that 4,000 celebrities, including female actors, TV stars, musicians and YouTubers, have had their faces superimposed onto pornographic material using AI.
Recently, the pop star Taylor Swift became a target of sexually explicit deepfakes. In response, a coalition of US senators introduced a bill to combat the dissemination of AI-generated explicit content without consent. Under the proposed legislation, individuals depicted in digitally manipulated nude or sexually explicit content, referred to as “digital forgeries”, would have the right to pursue a civil penalty. This penalty could be enforced against those who intentionally created or possessed the forged content with the intent to distribute it, as well as those who knowingly received such material without the subject’s consent.
Last year, our Global Threat Assessment highlighted that AI is already putting children at increased risk by intensifying offending and increasing the time and resources law enforcement needs to identify and prosecute offenders and safeguard children. This trend is set to worsen – unless we take collective action now.
Yes, AI offers many exciting possibilities to revolutionise our lives. But this must not come at any cost. Just as AI models can generate non-consensual deepfake pornographic images or steal from artists, they can also generate photorealistic child sexual abuse material – synthetic imagery featuring fictitious children, avatars of children, and imagery that includes real children.
At the simplest level, AI allows perpetrators to generate hundreds of child sexual abuse images at industrial scale, in seconds, with the click of a button.
This explosion of content has significant implications for law enforcement, making it increasingly difficult to identify whether or not a real child is in danger.
Offenders also have the potential to use AI tools to groom children at scale. We know that AI-generated child sexual abuse material plays a significant role in the normalisation of offending behaviour and will potentially create a more permissive environment for perpetrators, putting more children at risk.
There is also evidence that AI-generated child sexual abuse material has increased the potential for the re-victimisation of known child sexual abuse victims as their images are used over and over again.
Strengthening global responses across law enforcement cooperation, legislative change and regulatory approaches is critical. Industry and tech platforms also have a responsibility to get ahead of this rapidly evolving threat.
Now is the time for safety by design, ensuring children are protected as generative AI technology is built. Solutions are available, from removing harmful content from training data and using AI classifiers and manual review, through to watermarking content.
No matter who you are – an artist, a child, a celebrity, a government minister or a tech platform – the internet and digital platforms should be a safe place for everyone. It is time for us to support global, united action to make AI a force for good rather than a tool for criminal exploitation.
Otherwise we will soon reach a tipping point from which there is no return.
WeProtect Global Alliance brings together over 285 member organisations from governments, the private sector, civil society and intergovernmental organisations to collaborate and develop policies and solutions to protect children from sexual exploitation and abuse online.