In a groundbreaking investigation, the Internet Watch Foundation (IWF) has shed light on the alarming trend of child sexual abuse material (CSAM) being generated by artificial intelligence (AI). The IWF’s research report provides crucial insights into the use of AI technology to create explicit content involving minors, posing significant challenges for law enforcement and child protection agencies.
The study explains that these images are produced using text-to-image technology: users type a description, and the software generates a matching image. The technology is fast and faithful to its prompts, often producing images that are virtually indistinguishable from real photographs. Because many images can be generated at once, the only practical limit is the speed of the computer being used.
To determine the scope of the issue, the IWF identified 20,254 AI-generated images posted on a dark web CSAM forum within a one-month period. Of these, 11,108 images were selected for assessment; the remainder were excluded because they did not feature children or were non-criminal in nature. A team of 12 IWF analysts spent a combined 87.5 hours assessing the selected images, identifying 2,562 criminal pseudo-photographs and 416 criminal prohibited images.
The report highlights that while AI-generated CSAM currently represents a small portion of the IWF’s workload, its potential for rapid growth is a cause for concern. Perpetrators can legally obtain the necessary tools to produce these images offline, making detection extremely challenging. Moreover, AI CSAM has become increasingly realistic, making it difficult even for trained analysts to distinguish between AI-generated and real CSAM.
Disturbingly, the report also reveals that AI-generated CSAM has led to the re-victimisation of known victims of child sexual abuse, as well as the victimisation of famous children and children known to the perpetrators. The IWF has encountered numerous AI-generated images featuring identifiable victims and well-known individuals. The commercial exploitation of AI CSAM has also emerged as a new avenue for perpetrators to profit from child sexual abuse.
The report urges authorities to consider criminalising the creation and distribution of guides for generating AI CSAM, and to clarify the legal status of AI CSAM models. It emphasises that while such misuse represents only a small fraction of what AI technology is used for, proactive measures must be taken to mitigate the growing threat it poses.
As computer technologies, including generative AI, continue to advance, it is crucial to recognise the unique challenges posed by AI-generated CSAM. The ability to generate images offline and at scale has the potential to overwhelm efforts to combat online child sexual abuse, diverting resources from cases involving real victims.
Looking ahead, the report warns that the quality of AI-generated imagery will only improve, raising the prospect of realistic full-motion video content. The first short AI CSAM videos have already emerged, an alarming trend that is expected to intensify.
Addressing the issues surrounding AI-generated indecent images is crucial not only to combat the current problem but also to prepare for the coming proliferation of AI-generated video content. Swift action is necessary to develop effective responses to the growing threat posed by AI CSAM.
As the IWF continues to monitor and combat the abuse of AI technology, it remains clear that collaborative efforts between law enforcement agencies, technology companies, and policymakers are vital to safeguarding children and preventing the further spread of AI-generated CSAM.