What is it?
Safety by design is an approach to product and service development in which safety is a central feature, focus, and concern from the beginning, rather than an afterthought added only after a product or service has launched.
NCMEC's Position:
Governments should require and/or incentivize online platforms and device manufacturers to follow minimum safety by design principles to promote protection of children from online harms.
Why does it matter?
New online products and services are often built using relatively new technologies, with a primary focus on solving an existing problem, meeting a need, or creating new opportunities and consumer features. When safety, especially for children, is considered only after a product or service is deployed, safety becomes dependent on design and functionality decisions already made and often is disregarded. This leaves child users, and children who are not users, subject to harm.
Safety by design requires that safety be a central concern from the beginning of the development process for a new product or service. This includes weighing safety in design decisions alongside other factors such as intended functionality, preferred user experience, and data security. Child safety is advanced when online platforms incorporate features and functionality, tailored to the new product or service being designed, that prevent, detect, disrupt, and facilitate reporting of online child sexual exploitation.
What context is relevant?
Safety by design is a concept widely accepted in the global manufacturing and production of consumer products. For example, common global safety standards for automobiles, such as seatbelts and airbags, have become ubiquitous examples of safety by design in the automobile industry. Through legislative bodies and/or regulatory agencies, governments can impose safety standards on online platforms, just as they have done for many years for automobiles, household appliances, food products, and pharmaceuticals. Safety by design in the online space is needed to create a similar global impact in protecting children from online harms.
Since 2018, the eSafety Commissioner in Australia has advocated for safety by design in the technology sector, specifically for online platforms. The eSafety Commissioner promotes three key safety by design principles: service provider responsibility, user empowerment and autonomy, and transparency and accountability. Through its regulatory authority, the eSafety Commissioner also enforces online platforms' legal obligations under Australian law.
Discussions about online safety, like those about data privacy, often focus on protecting users, but many children who are exploited online are not users of the platforms on which they are harmed. Similarly, while device manufacturers often highlight the safety and security features of their products, such as Apple's crash detection and on-device encryption of biometric data, devices largely lack features to protect children from an offender's use of a device to create child sexual abuse material (CSAM), share CSAM with other offenders, or commit other acts of child sexual exploitation.
Children are harmed not only when offenders communicate with them directly for exploitative purposes, but also when offenders use online services to commit CSAM-related offenses without engaging children at all. Children are harmed when an offender coerces them into creating exploitative images of themselves, and when an offender produces CSAM depicting a child who is not using a device. Safety by design efforts that focus exclusively on user safety overlook threats to children who are not users.
During a March 2024 hearing before a U.S. congressional committee, NCMEC identified the failure of generative artificial intelligence (GAI) companies to adhere to safety by design principles as a factor contributing to online child exploitation. Four months later, the tech non-profit organizations Thorn and All Tech Is Human collaborated with eleven GAI developers to publicly commit to safety by design principles to “guard against the creation and spread of AI-generated child sexual abuse material…and other sexual harms against children.” A formal white paper accompanied the public announcement, outlining the principles and sharing strategies for how various stakeholders can implement them.
For platforms that were not developed with safety by design principles, “safety by redesign” is possible through thoughtful risk assessment and the development and implementation of mitigation measures that improve safety for both users and non-users.
What have survivors said about it?
Survivors support requiring safety by design but have expressed concerns about enforcement and accountability for compliance. They commend legislation enacted in Australia (the Online Safety Act 2021) and introduced in Canada (Bill C-63, the Online Harms Act, 2024) as examples of legal frameworks that, at least in part, promote safety by design.
Online platforms should engage survivors as consultants when developing safety by design measures. Developers should consider the safety of anyone who might be harmed through the misuse of their products and services, not only their users.
Technology should be designed with safety as a priority. Companies should consider the experiences of survivors and their families and be proactive about implementing safety measures into their technology.
- Survivor
Social media platforms continue to roll out new products, changes or additional features to old products, and software updates all the time. Why do my needs as a survivor continue to be left out of these updates simply because I do not use their services? Where is the line drawn for whose safety doesn't matter when there is still content being circulated of innocent victims and survivors who never subscribed, and/or never took part in the upload, to these online platforms?
- Survivor
What drives opposing viewpoints?
Opposition to safety by design requirements for online child protection centers on concerns that free speech could be impaired by over-moderation through content screening and by limits on access to legal content. Other criticisms are rooted in privacy concerns (especially where content moderation is involved), doubts about the effectiveness of safety by design measures, fears that incorporating safety by design will slow innovation, and the financial cost of implementation.