What is it?
- Risk and safety assessment across platforms and upstream/downstream providers, covering both the risk to children and the management of offenders.
Why is it important?
- A series of design decisions informs the development of a digital service or product. These decisions are likely to be driven largely by the goal of ensuring the wide promotion and use of the service or product in question. The extent to which design procedures assess and act on the risks and potential harm for children is unknown.[1]
- A preventative and proactive approach is required to ensure that user safety is embedded into the design, development and deployment of online and digital products and services – Safety by Design. A Safety by Design approach necessitates ongoing consideration of age-appropriate design and access to services, in line with the end-user’s maturity and evolving capacities.
- Childhood comprises various cognitive stages and, consequently, inherent vulnerabilities. An unregulated approach to design can create situations where children engage with subtle ‘persuasive technology’[2] without the requisite understanding, awareness and maturity to manage any risks or harm they may face. For example, without their or their parents’ knowledge, children can be pushed towards increased engagement (including with individuals they do not know, and with adults) and lower privacy. Recommendation algorithms determine the content children see and/or the new ‘friend’ profiles suggested to them. Profiles are also often set as public by default, which can predetermine the visibility of children’s identity and interests without their knowledge, understanding or consent (from them or their parents).
- Children may be put at risk by pursuing public ‘likes’ or sharing content. Examples include liking or sharing abusive or bullying material (which could be deemed a public endorsement) and the exchange of sexual acts or imagery. In addition, in some circumstances, ‘commercial pressures’ from advertising or gaming platforms can be used as a grooming tool by offenders wishing to exploit children’s desire to compete in games and/or access ‘lootboxes’, game bonuses and so on.
- This needs to be balanced against the need to empower children and young people and to provide them with controls and tools to manage their own experiences.
- Moreover, without visible channels for reporting abuse or causes for concern, and accompanying information, on every digital service or product, children (and adults) do not know how, why and what to report. This raises the likelihood of a ‘cyberbystander effect’ and limited reporting.
- With one billion children using the internet, the potential risks associated with subtle, seemingly small and unnoticeable design features are magnified.[3]
- Additional approaches are also required to deter offending. Key principles of situational crime prevention are:
- Reduce opportunity
- Increase risks (of getting caught)
- Reduce rewards
- Reduce provocations
How can it be implemented?
- All digital services or products should carry out child risk and impact assessments and safety review processes in the design phase (or retrospectively), and systematically thereafter, to understand the potential risks for children and their implications, as well as to manage and mitigate potential offending behaviour.
- Risk assessments should consider a range of settings and use scenarios, assess how various design features interact when used at the same time, and identify any resulting risks. Specific considerations in assessing scenarios and particular features could include: i) a comparative assessment of the risks of public likes, and whether a private ‘like’ feature reduces risks while still providing the required engagement with services and products; and ii) a comparative assessment of the risks in scenarios where all children (all individuals up to the age of 18) are given safe and privacy-preserving settings by default (a minimal sketch of such defaults follows this list).
- Vulnerability assessments can also be carried out to understand the vulnerabilities of different groups of children to online sexual abuse and exploitation. For example, factors to consider include socio-economic status, location, age, education level, disability, and family status (e.g. in alternative care).
- Child risk assessments and vulnerability assessments should aim to reflect children both in specific groups and as a collective; they should also reflect differences between children through intersectional analysis (e.g. gender, age, location, language, ethnicity). All assessments should include children’s own voices and perspectives.
- A collective set of minimum approved risk levels should be made transparent and agreed by core actors, and changes should be made where child risk assessment findings require them.
- Ideally, a standard basic format for child risk assessments should be developed and shared amongst actors, both to support those who do not have the capacity or resources and to promote an industry-wide basic minimum child risk assessment framework (or key contents); a sketch of one possible format also follows this list.
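To illustrate scenario ii) above, the sketch below shows one way a service could apply safe, privacy-preserving defaults to every account belonging to a child. It is a minimal illustration only, assuming a hypothetical account-creation flow; the names (AccountDefaults, defaults_for) and the particular settings are illustrative, not a reference to any specific platform.

```python
from dataclasses import dataclass

ADULT_AGE = 18  # all individuals up to the age of 18 are treated as children


@dataclass
class AccountDefaults:
    profile_public: bool                # visibility of the child's identity and interests
    likes_visible_to_others: bool       # a private 'like' still registers engagement
    suggested_to_strangers: bool        # inclusion in 'friend' recommendation pools
    messages_from_unknown_adults: bool  # direct contact from adults the child does not know


def defaults_for(age: int) -> AccountDefaults:
    """Return safe, privacy-preserving defaults for any user under 18 (illustrative)."""
    if age < ADULT_AGE:
        return AccountDefaults(
            profile_public=False,
            likes_visible_to_others=False,
            suggested_to_strangers=False,
            messages_from_unknown_adults=False,
        )
    # Adults may start with open defaults and can still opt into stricter settings.
    return AccountDefaults(True, True, True, True)
```

A comparative risk assessment would then contrast this configuration with open defaults, feature by feature, rather than treating ‘settings’ as a single toggle.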
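Similarly, a shared basic format for child risk assessments could be as simple as a common record structure that every actor can complete and compare. The sketch below is one possible shape, under the assumption that risk is recorded per feature and per scenario; all names and fields are hypothetical, not a published standard.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class FeatureRisk:
    feature: str                 # e.g. "public likes", "friend suggestions"
    scenario: str                # the setting/use scenario assessed
    affected_groups: List[str]   # groups identified through intersectional analysis
    level: RiskLevel
    mitigation: str              # the agreed change, or rationale for accepting the risk


@dataclass
class ChildRiskAssessment:
    product: str
    assessed_on: str             # date of this (repeatable) assessment
    children_consulted: bool     # were children's own voices and perspectives included?
    risks: List[FeatureRisk] = field(default_factory=list)

    def exceeding(self, approved_maximum: RiskLevel) -> List[FeatureRisk]:
        """Risks above the collectively agreed maximum level, which require changes."""
        order = [RiskLevel.LOW, RiskLevel.MEDIUM, RiskLevel.HIGH]
        return [r for r in self.risks
                if order.index(r.level) > order.index(approved_maximum)]
```

Keeping the format this small lowers the barrier for actors without dedicated resources, while the shared structure makes assessments comparable across the industry.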
Further resources:
- eSafety Commissioner (Australia), global Safety by Design initiative.
- Lucy Faithfull Foundation, Services.
- Lucy Faithfull Foundation, Preventing child sexual abuse.
- 5Rights Foundation, Risk-by-Design microsite.
- UNICEF, Recommendations for online gaming industry.
[1] In the UK, the Age Appropriate Design Code will become law on 2 September 2020: Information Commissioner’s Office (UK) (2020), Age-appropriate design: a code of practice for online services.
[2] Technology designed with the underlying motive of modifying a certain attitude or behaviour, exploiting psychological and sociological theories such as persuasion and social influence.
[3] Informed by: 5Rights Foundation, Risk-by-Design microsite.