What is it?
Generative artificial intelligence (GAI) tools can create new content or alter existing content—including text, still images, and videos—based on input or prompts from a user. A variety of measures can be implemented to prevent GAI tools from creating child sexual abuse material (CSAM) and other exploitative content, but such measures are not mandated by law or regulation and have not been universally adopted on a voluntary basis.
Available measures to support prevention efforts include:
- incorporating “Safety by Design” principles;
- subjecting GAI tools to simulated real-world attempts to create harmful material (a practice known as “red teaming”) so that weaknesses enabling such content can be identified and removed;
- proactively screening training datasets to keep harmful content out and remove it as necessary;
- participating in collaborative safety initiatives with other industry members and child protection organizations;
- deploying text classifiers to detect and block user prompts intended to produce exploitative content;
- deploying image classifiers to detect and block exploitative image and video outputs (a simplified sketch of how such classifier checks can be wired into a generation pipeline follows this list);
- reporting exploitative prompts and content to NCMEC.
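To make the classifier-based measures above more concrete, the following is a minimal, purely illustrative sketch of how a prompt check and an output check might sit around an image-generation call. Every name in it (the generator, the two classifiers, the reporting hook, and the thresholds) is a hypothetical placeholder, not any specific vendor's API.

```python
# Minimal, purely illustrative sketch of classifier-based safety gating for a
# generative image service. All names and thresholds below are hypothetical
# placeholders, not any specific vendor's API.

from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class GenerationResult:
    image: Optional[bytes]  # generated image bytes, or None if the request was blocked
    blocked: bool           # True if a safety check rejected the prompt or the output
    reason: str             # short explanation kept for auditing and reporting


def gated_generate(
    prompt: str,
    generate_image: Callable[[str], bytes],        # the underlying GAI model (assumed)
    score_prompt: Callable[[str], float],          # text classifier: 0.0 (benign) to 1.0 (exploitative)
    score_image: Callable[[bytes], float],         # image classifier: 0.0 (benign) to 1.0 (exploitative)
    report_violation: Callable[[str, str], None],  # hook into the service's reporting workflow (e.g., to NCMEC)
    prompt_threshold: float = 0.5,
    image_threshold: float = 0.5,
) -> GenerationResult:
    """Run a text-classifier check before generation and an image-classifier check after."""
    # 1. Pre-generation check: refuse prompts the text classifier flags as exploitative.
    if score_prompt(prompt) >= prompt_threshold:
        report_violation("prompt", prompt)
        return GenerationResult(image=None, blocked=True, reason="prompt flagged by text classifier")

    # 2. Generate only if the prompt passed the pre-check.
    image = generate_image(prompt)

    # 3. Post-generation check: refuse outputs the image classifier flags as exploitative.
    if score_image(image) >= image_threshold:
        report_violation("output", prompt)
        return GenerationResult(image=None, blocked=True, reason="output flagged by image classifier")

    return GenerationResult(image=image, blocked=False, reason="passed both safety checks")
```

In practice, the classifiers, thresholds, and what is retained for reporting would be determined by a service's own safety and legal review; the sketch only shows where such checks sit relative to the generation step.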
NCMEC's Position:
Services capable of using generative artificial intelligence to create or modify imagery should implement measures to prevent the output of GAI CSAM and other child sexual exploitation content.
Why does it matter?
The production and distribution of CSAM is a serious criminal offense that causes significant harm to victims and promotes the sexual abuse and exploitation of children. Recent advances in GAI technology have enabled offenders to create original, or modify existing, CSAM in several ways. An offender can prompt a GAI tool with descriptive text inputs to create CSAM. Existing non-explicit imagery can be modified by automated or user-guided GAI tools to depict a real child nude or engaged in sexually explicit conduct. The resulting photo-realistic CSAM exploits children, creates additional challenges for reporting hotlines and law enforcement, and diverts resources from investigations seeking to rescue real children suffering ongoing harm.
Some GAI models also have been used to generate text instructions on how to abuse and exploit a child, avoid detection, destroy evidence, and take other actions in support of criminal activity involving sexual exploitation of children.
Opportunities exist to use technology to prevent the creation of GAI CSAM and other exploitative content. Allowing GAI technology to be created and marketed to the public without implementing, wherever possible, robust measures to prevent the creation of GAI CSAM amounts to facilitating the creation of such content and knowingly putting children at risk of sexual exploitation.
What context is relevant?
The recent and rapid proliferation of GAI tools in the absence of a legal or regulatory framework has created a landscape in which some services are closed-source (tightly controlled by major companies as proprietary products), while others are open-source (allowing anyone to access, download, and modify the source code). Companies that operate closed-source services have substantive business reasons, even absent legal or regulatory obligations, to build safety measures into their GAI tools, and many have done so. While closed-source services currently may be more widely utilized by the public, open-source models—which can be downloaded, modified, and deployed without needing to communicate with a centralized server or service—are much more challenging to address.
CSAM offenders have communicated on dark web forums to teach, and learn from, each other how to use open-source GAI models to create CSAM. Researchers have documented publicly accessible forum messages in which users discussed sharing CSAM, downloading open-source GAI models, training models on non-synthetic CSAM, and other strategies to manipulate GAI tools to create GAI CSAM.
In 2024, tech non-profits Thorn and All Tech is Human collaborated with eleven GAI developers to publicly commit to Safety by Design principles to “guard against the creation and spread of AI-generated child sexual abuse material…and other sexual harms against children.” A formal whitepaper accompanied the public announcement to outline the principles and share strategies for how various stakeholders can implement them.
What does the data reveal?
In 2023, NCMEC began tracking CyberTipline reports that involved GAI CSAM or other sexually exploitative content related to GAI, finding approximately 4,700 such reports that year. During just the first half of 2024, NCMEC identified more than 7,000 GAI-related CyberTipline reports, including more than 20,000 GAI CSAM files.
What have survivors said about it?
Survivors stress the importance of prevention and highlight the opportunities to adopt prevention measures within GAI models. Some survivors express frustration and anger over the seemingly rapid development of text-to-image GAI models and the rush to bring them to market without appropriate safeguards to prevent the creation of CSAM. Solutions focused on prevention, rather than surveillance and reporting, are likely to garner more widespread support in the face of privacy concerns.
Prevention is vital because it addresses the issue at its source, minimizing potential harm before it occurs. In the context of generative AI, prevention measures help stop the creation and dissemination of harmful content in the first place. The existence and spread of AI-generated abusive images can cause significant psychological harm to victims and survivors, who may feel violated knowing such images are in circulation. Even if they are not real, they can still perpetuate and normalize harmful behavior and attitudes towards children.
- Survivor
What drives opposing viewpoints?
Few stakeholders openly oppose CSAM prevention measures specifically, as taking such a position invites marginalization. More general opposition to restrictions on GAI development may include arguments that such regulation would stifle innovation, impose economic costs that smaller companies might not be able to bear, or infringe on freedom of expression.