Briefly: Generative AI CSAM

Introduction to the issue and NCMEC's position

What is it?

Generative artificial intelligence (GAI) tools can create new content or alter existing content—including text, still images, and videos—based on input or prompts from a user. As GAI tools became increasingly available in 2023, their use to create fully or partially synthetic child sexual abuse material (CSAM) quickly increased as well.

NCMEC's Position:

AI-generated CSAM—whether partially or fully synthetic—is harmful and should be subject to the same prohibitions on production, possession, and distribution as non-synthetic CSAM.

Why does it matter?

In many jurisdictions, CSAM created by GAI tools (referred to as “GAI CSAM”) is already illegal, even where existing laws did not anticipate the development of GAI. Such laws may include references to “a depiction which sexualizes children” (as in Norway) or an “indecent photograph or pseudo-photograph of a child” (as in the United Kingdom) as the basis for prohibiting GAI CSAM. Elsewhere, existing laws may be less clear or completely silent on GAI CSAM.

GAI tools can be used to create visually realistic sexually exploitative depictions of actual children by combining a real child’s face with a real or synthetic depiction of sexually explicit content; altering an innocent image of a child to depict sexually explicit content; or creating a GAI nude depiction (as various “nudify” apps are designed to do) of a real child. Some offenders use GAI CSAM as they use other CSAM—for sexual gratification. Financially motivated offenders have used GAI CSAM to extort child victims for monetary gain by demanding payment to prevent online distribution of GAI images depicting the child. Offenders also can use GAI CSAM as part of the manipulation or “grooming” process to normalize exploitative conduct and increase a child’s vulnerability to abuse.

In addition to these types of abuse, GAI CSAM is also harmful in the following ways:

  • Partially synthetic GAI CSAM is based on elements of images and videos of actual children. This original imagery might not have been sexually exploitative, but the manipulation of a non-exploitative image to create GAI CSAM sexually exploits the children involved.
  • Fully synthetic GAI CSAM (not based on a real child’s image) might have been created with a tool that was trained on real, or non-synthetic, child sexual exploitation content.*
  • Visually realistic GAI CSAM strains law enforcement resources, which should be focused on efforts to identify, locate, and protect the real children depicted in CSAM.
  • Whether partially or fully synthetic, GAI CSAM normalizes the sexual abuse and exploitation of children; has a low barrier to creation given the wide accessibility and ease of use of GAI tools; and promotes the exploitative motives of offenders.

*A December 2023 report by the Stanford Internet Observatory revealed that some GAI models were “trained directly on CSAM present in a public dataset of billions of images….” That report prompted efforts to remove CSAM from such training datasets and/or cease reliance on them.

What context is relevant?

Numerous GAI tools have proliferated since the technology became mainstream in 2023. Tech companies of all sizes have released GAI tools as standalone products and as additional features within existing services, dramatically altering how people interact with this technology. Regulation and legislation specific to these tools lag far behind, however, which facilitates their abuse by offenders seeking to sexually exploit children.

Even in jurisdictions where GAI CSAM is clearly illegal, investigations face several challenges, including how to reliably distinguish fully synthetic GAI CSAM (which does not depict a specific child in danger) from partially synthetic GAI CSAM and from non-GAI CSAM. Prosecutions for GAI CSAM offenses have occurred in several jurisdictions, including the U.S., Australia, and South Korea.

What does the data reveal?

In 2023, NCMEC’s CyberTipline received approximately 4,700 reports related to CSAM or sexually exploitative content involving GAI. Of those reports, more than 70% were submitted by mainstream social media and online storage platforms. This indicates that most GAI tech companies are not reporting to the CyberTipline. Reports related to GAI CSAM increased to about 450 per month during the first quarter of 2024.

In October 2023, the Internet Watch Foundation (IWF) published a report about its discovery of more than 20,000 GAI images on a particular dark web forum. Analysts determined that about 3,000 of those images were criminal under then-current United Kingdom laws against the production, distribution, and possession of an “indecent photograph or pseudo-photograph of a child” or the possession of “a prohibited image of a child.” Most of those were visually realistic images depicting children between 7 and 13 years old, and more than 99% featured female children. A July 2024 update to that report found more than 3,500 new GAI CSAM files on the same forum, with more severe abuse increasingly common and GAI CSAM videos beginning to appear. The IWF also noted increases in GAI CSAM on the “clear web” and in GAI CSAM depicting known or identified victims of CSAM and “famous children.”

What have survivors said about it?

Survivors have expressed that legal arguments differentiating between “real” and “fake” CSAM are fundamentally flawed. Discussions about GAI CSAM must consider the harm done by creating CSAM, regardless of GAI’s role. Some survivors are particularly concerned that offenders’ use of GAI CSAM will fuel their sexual interest in abusing real children and will increase the threat those offenders present to communities. The use of GAI CSAM to manipulate or “groom” children and overcome their inhibitions to sexual conduct, making them more vulnerable to offenders, is also a concern.

It is important to understand that the creation and existence of Generative AI CSAM exploits real human victims of child sexual abuse and child sexual abuse imagery. …[T]hat technology continues to innovate new ways to exploit child victims and survivors via AI is horrific and [it] should be criminalized on a global scale immediately.

- Survivor

What drives opposing viewpoints?

General opposition to legislative or regulatory restrictions on GAI tools is based on concerns about stifling innovation; limiting free expression; challenges related to the complexity of the technology; and creating barriers to entry for smaller companies with limited resources. Some have argued that, rather than enacting new legal obligations, it is preferable to strengthen enforcement of existing laws that already address the harmful conduct for which GAI might be misused, such as harassment and defamation. Notably, GAI CSAM is not clearly addressed in many jurisdictions, particularly where a legal framework requires proving that a depicted child is real.

Opponents of prohibitions targeting GAI CSAM argue that such measures are likely to be overbroad and impact lawful imagery, and/or that children depicted in GAI CSAM are not “real” and that this imagery therefore causes no harm. Others have suggested, contrary to the opinions of subject matter experts, that GAI CSAM and synthetic sexual representations of children can serve a therapeutic purpose, providing offenders with sexual gratification without committing crimes against “real” children.