
Briefly: Voluntary Detection by Online Platforms

Introduction to the issue and NCMEC's position

What is it?

Independent of any government or regulatory mandate, online platforms have both a business interest in and the technological means for screening or moderating content on their services to enforce their terms of service. In some jurisdictions, however, laws or regulations intended to promote user privacy may restrict or prohibit these voluntary screening efforts.

NCMEC's Position:

Governments should allow online platforms to use various strategies to detect, prevent, disrupt, and report, as appropriate, all types of online child sexual exploitation, even if only as an exception to broader or more general prohibitions on content screening.

Why does it matter?

Each year, online platforms report tens of millions of online child sexual exploitation incidents to the CyberTipline, most of which they discover proactively and voluntarily. When user privacy laws or regulations prevent online platforms from voluntarily screening content for CSAM or other child sexual exploitation, children victimized through that content remain at risk of continued and often escalating harm because their victimization goes undetected and unreported.

Exceptions that allow online platforms to voluntarily detect CSAM and other forms of online child sexual exploitation create opportunities for crimes against children to be detected, reported, and interrupted so that children can be recovered from abuse and safeguarded.

What context is relevant?

Through legislation and regulation, governments can impose a variety of requirements and prohibitions on online platforms. At one extreme, some jurisdictions may seek to heavily censor online content relating to a range of political and societal issues, drawing opposition from free speech advocates. At the other extreme, pro-privacy policies that are not balanced with child safety considerations may lead jurisdictions to prohibit online platforms from moderating any content at all, drawing opposition from those concerned about “misinformation” or child safety.

Where prohibitions on online moderation place children at risk, advocates have promoted solutions that allow online platforms to maintain voluntary detection efforts by using proven technologies such as cryptographic hashing, perceptual hashing, and image classifiers to detect, remove, and report suspected CSAM.
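For readers unfamiliar with how hash-based detection operates, the sketch below illustrates only the exact-match (cryptographic) case, in which a platform compares the hash of an uploaded file against a list of hashes of previously identified CSAM. The hash value, file name, and function names are hypothetical placeholders rather than any real hash list or a specific vendor's API, and the sketch makes no claim about how any particular platform implements screening.

```python
# Minimal sketch of exact-match hash screening, assuming the platform holds a
# list of SHA-256 digests of previously identified CSAM. The value below is a
# placeholder, not a real hash-list entry.
import hashlib
from pathlib import Path

# Hypothetical known-hash list (placeholder digest).
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of_file(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def matches_known_hash(path: Path) -> bool:
    """Return True if the file exactly matches an entry in the known-hash list."""
    return sha256_of_file(path) in KNOWN_HASHES

if __name__ == "__main__":
    upload = Path("uploaded_image.jpg")  # hypothetical upload
    if upload.exists() and matches_known_hash(upload):
        print("Match against known-hash list; route for human review and reporting.")
    else:
        print("No exact match found.")
```

A cryptographic digest like the one above changes completely if an image is re-encoded or slightly edited, which is why deployed systems typically pair it with perceptual hashing, where visually similar images produce similar hash values, and with classifiers that can flag previously unseen material.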

In late 2020, NCMEC led an international advocacy effort to address a then-pending legislative change in the European Union (EU) that would prohibit online platforms from conducting the customary content screening and moderation that had led to the submission of millions of CyberTipline reports each year. NCMEC and other stakeholders called on the EU to adopt a solution before the prohibition took effect, but legislators did not act in time. The prohibition remained in place for several months, with a negative impact on the detection and reporting of online CSAM relating to EU users.

EU legislators eventually implemented a solution, known as a “derogation,” that allowed online platforms to voluntarily detect CSAM. However, it was enacted only as a temporary measure, which has since been extended to April 2026 while legislators negotiate a permanent solution.

What does the data reveal?

The European Commission has noted that its assessment of the derogation’s impact has been complicated by inconsistent reporting by online platforms regarding their voluntary efforts to detect CSAM.

NCMEC assessed the 18 weeks immediately following the December 2020 enactment of the provisions prohibiting voluntary detection and observed a 58% decrease in EU-related CyberTipline reports. That trend continued throughout 2021, even after the temporary derogation took effect on August 2, 2021. By the end of 2021, while global CyberTipline report volume increased 35% overall, EU-related reports decreased 47%. Similarly, while the total number of files reported to the CyberTipline increased 30%, the number of files contained in EU-related reports decreased 58%. The EU’s share of all CyberTipline reports dropped from almost 5% in 2020 to less than 2% in 2021.

What have survivors said about it?

Survivors acknowledge the importance of privacy, but they note that online privacy advocates typically prioritize the privacy interests of platform users over the privacy interests of children, particularly victims of CSAM, who may not themselves even be users of the platform.

Some survivors—having suffered harm facilitated through online platforms—feel conflicted, not wanting governments to prohibit voluntary detection of CSAM but also not trusting online platforms to implement detection and reporting strategies effectively.


Ensuring tech companies can monitor their own sites for CSAM is an essential child sexual abuse material prevention strategy. Advocacy agencies and tech companies need to be able to collaborate to identify perpetrators' existing strategies and emerging trends for distributing online CSAM.

- Survivor

What drives opposing viewpoints?

Opponents of voluntary detection of CSAM by online platforms tend to be strong advocates of user privacy. They question the effectiveness of the technologies used for detection, dispute the accuracy rates claimed by some individuals and organizations, raise concerns that detection technology can be used to censor content unrelated to child sexual exploitation, and point to false accusations against innocent parties that they attribute to unreliable detection strategies.