Navigating AI regulation: mitigating the risks of generative AI in producing child sexual abuse material
By Shailey Hingorani, Head of Policy, Advocacy and Research, WeProtect Global Alliance
June 2024
Last month, the Federal Bureau of Investigation (FBI) charged a US man with creating more than 10,000 AI-generated sexually explicit and abusive images of children. And it’s not just adult perpetrators who are using AI. Cases of teenage boys creating and sharing non-consensual nude images of female and male classmates and teachers have been reported in the United States, Australia and Spain.
In 2023, the National Center for Missing & Exploited Children (NCMEC) received 4,700 reports concerning AI-generated child sexual abuse material. While this is still a relatively small number, it is likely to grow: last year, researchers found that popular AI image generators had been trained on datasets containing child sexual abuse imagery. These images are likely to have made it easier for AI systems to produce ‘new’ child sexual abuse material. And because these tools are so easy to use, child sexual abuse material can now be produced on an industrial scale with very little technical expertise.
To address this, countries worldwide are adopting different regulatory approaches. This blog explores three common approaches to AI regulation, along with voluntary industry collaboration, examining their principles, examples and potential effectiveness in mitigating the use of AI for harmful purposes.
Risk-based regulation
A prominent approach to AI regulation is risk-based, where regulatory measures are tailored to the perceived risks associated with different AI applications. This model ensures that higher-risk AI systems are subject to stricter oversight, while lower-risk systems face fewer restrictions.
The European Union (EU) exemplifies this approach with its recently adopted AI Act, which categorises AI applications into different risk levels. High-risk AI systems, such as those that pose significant risks to health, safety or fundamental rights, must comply with stringent baseline requirements, including robust data protection, transparency, and accountability measures such as risk assessments.
Another jurisdiction that may adopt this approach is Brazil. Its proposed AI regulation also categorises AI systems according to different levels of risk (for example, excessive or high risk) and requires every AI system to be risk assessed before it is released to the market.
For generative AI that could create harmful content, including child sexual abuse material, these regulations mandate strict oversight to prevent abuse, with safety measures that can be implemented at both the developer and deployer levels.
The risk-based approach has the potential to be effective in addressing the misuse of generative AI for harmful purposes by ensuring clear and enforceable liability obligations and safety measures. However, its success depends on the effective implementation of regulations, the development of common standards (for example, on transparency measures, risk assessment and the watermarking of generated content) and the ability to adapt to emerging risks.
Principles-based frameworks
Another common regulatory approach involves establishing comprehensive ethical frameworks that guide the development and deployment of AI technologies. These frameworks emphasise core principles such as human rights, transparency, and sustainability.
The United Kingdom has developed a principles-based, non-statutory, and cross-sector AI framework. The UK’s approach integrates broad ethical standards with sector-specific regulations to address unique risks in different areas. The framework outlines five principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. These principles guide existing regulators in the responsible design, development, and use of AI.
Regulators are expected to adopt a proportionate, context-based approach, leveraging existing laws and regulations. Ofcom, the UK’s communications regulator, set out its strategic approach to AI in March 2024. It highlights its regulatory powers, noting that the Online Safety Act (OSA), which already requires in-scope services to take proportionate measures to prevent exposure to illegal or harmful content, could also encompass AI-generated child sexual abuse material.
Singapore is taking a similar approach: it has drafted a Model AI Governance Framework for Generative AI that seeks to provide guidance on suggested practices for the safety evaluation of generative AI models.
Comprehensive ethical frameworks provide a solid foundation for responsible AI use. By promoting ethical principles, these frameworks help prevent the misuse of AI for creating harmful content. However, their effectiveness relies on widespread adherence and the integration of these principles into enforceable regulations.
Sector-specific regulations
Given the diverse applications of AI, some jurisdictions implement sector-specific regulations alongside general AI guidelines. This dual approach addresses the unique challenges and risks of AI use in different sectors.
The United States combines broad AI guidelines (e.g. the White House Executive Order on AI) with sector-specific regulations, including multiple efforts at the state level (e.g. bills in California, Texas and New York, among others).
While there is no comprehensive federal legislation or regulation in the US on the development, deployment and use of AI, existing federal laws do address specific uses of AI. The PROTECT Act of 2003, for instance, specifically targets the production and distribution of child sexual abuse material, including AI-generated content.
Sector-specific regulations effectively address the distinct risks associated with AI applications in various sectors. By focusing on targeted measures, these regulations can mitigate the misuse of generative AI for creating harmful content. However, the success of this approach depends on effective coordination and enforcement across sectors.
Voluntary collaboration
Several leading AI companies, including Adobe, Amazon, IBM, Google, Meta, Microsoft, OpenAI, and Salesforce, have voluntarily pledged to promote the safe, secure, and transparent development of AI technology. These companies have committed to conducting internal and external security testing of AI systems before release, sharing information on managing AI risks, and investing in various safeguards. Additionally, several of these companies have signed up to Thorn’s Safety by Design principles on generative AI.
As highlighted in our latest Global Threat Assessment, cross-sector voluntary collaboration remains critical to enable responsiveness, drive innovation and centre the voices of children and survivors. It should be as transparent as possible to enable greater accountability and user confidence, and we see it as a vital complement to regulation. Initiatives like the Global Online Safety Regulators’ Network should encourage greater regulatory alignment and improved inter-institutional coordination. Innovation and profits should never come at the expense of the safety and security of children using these platforms, tools and services.
While it is encouraging to see these companies taking proactive steps, the effectiveness of voluntary commitments remains uncertain. Voluntary action and collaboration will remain a critical complement to legislation, but their success depends heavily on companies’ willingness to adhere to their commitments. Critics argue that without mandatory regulations, some companies may prioritise innovation and profitability over safety and security. It therefore remains to be seen how effective these voluntary efforts will be in mitigating the risks associated with AI technology.
Conclusion
The regulation of AI, particularly in preventing its misuse for creating child sexual abuse material, requires a multifaceted approach. Risk-based regulation, comprehensive ethical frameworks, and sector-specific regulations each offer valuable strategies to address these challenges. The EU, UK and US provide examples of these approaches in action, prioritising principles such as human rights, transparency and accountability.
While each approach has its strengths, their effectiveness ultimately hinges on rigorous implementation, adaptability to new risks and international cooperation. As AI technology continues to evolve, so too must the regulatory frameworks that set minimum standards, ensuring the benefits of AI are realised without compromising safety or ethics.