The Internet Watch Foundation has uncovered what they describe as “a world of text-to-image technology. You type in what you want to see in online generators and the software generates the image.” Their first investigative report on Artificial Intelligence (AI) and child sexual abuse material (CSAM) was published in early 2023 and was entitled, ‘How AI is being abused to create child sexual abuse imagery’. Here are just a few of their findings (I encourage you to read the entire report):
- In total, 20,254 AI-generated images were found to have been posted to one dark web CSAM forum in one month.
- Of these, 11,108 images were selected for assessment because they were judged most likely to be criminal.
- Perpetrators can legally download everything they need to generate these images, then can produce as many images as they want – offline, with no opportunity for detection. Various tools exist for improving and editing generated images until they look exactly like the perpetrator wants.
- Most AI CSAM is now realistic enough to be treated as ‘real’ CSAM. The most convincing AI CSAM is visually indistinguishable from real CSAM, even for trained IWF analysts. Text-to-image technology will only get better and pose more challenges for IWF and law enforcement agencies.
Protecting our children from online predators is becoming almost impossible as technology escalates at breakneck speed. It is like trying to grab and hold onto raging flood waters. And we are witnessing a tsunami in the realm of Artificial Intelligence. In fact, this tsunami of AI-generated CSAM within pedophile online forums and message boards has been described as a “predatory arms race”.
In an interview with ABC News, Ian Critchley of the National Police Chiefs’ Council stated,
We are seeing children groomed, we are seeing perpetrators make their own imagery to their own specifications, we are seeing the production of AI imagery for commercial gain – all of which normalizes the rape and abuse of real children.
The massive demand for CSAM, and the ease of producing this disgusting material using AI, allows perpetrators to re-exploit children who were exploited years ago: their faces are lifted from existing content and worked into new images tailored to an exact act or an explicit request made today.
It isn’t limited to these online forums or formerly exploited children. There is growing concern that children are using AI on their peers. Since early November 2023, Dorota Mani and her 14-year-old daughter, Francesca, have appeared in multiple interviews for major news outlets, like Fox News, CNN, and MSN. Francesca and several other girls at her school, Westfield High School in New Jersey, were exploited when another student used photos of their faces to create nude images with AI. Can you imagine how devastating this would be if it were your daughter or son? How will this impact teen suicides? Fortunately, in this case Francesca felt safe talking to her mother about the incident. Both mother and daughter are turning this tragedy into activism. Not only have they shared their story with major news outlets to educate the public, but they have also petitioned legislators and President Biden to enact laws to regulate AI abuses.
Another horrible ramification is the impact on law enforcement. Efforts to find and help these children will consume enormous time and resources, only for investigators to discover the victims never existed and were generated by AI.
What will happen when AI is combined with virtual reality? The internet is already saturated with sites promoting the promises of virtual reality. In April 2018, Peter Rubin wrote a piece for WIRED describing how pornography in virtual reality makes users feel as though they are actually participating. There is much money to be made – and many lives destroyed along the way.
The time to act to control AI was yesterday. We must act now. Contact your state and federal legislators and demand action against the use of AI in developing pornography and CSAM. Immediately report any actions that involve children in your sphere of influence: file a police report, contact the national hotline (call 1-888-373-7888 or text 233733), and follow up to ensure the abuse is investigated. Educate yourself, your child, their school, and others in your sphere of influence. Most important, communicate with your child – your home and your relationship must be a safe haven they can come to when they are afraid, confused, or questioning.