Abuse Prevention in the Age of AI
Addressing Deepfakes and CSAM in Youth-Serving Organizations
In recent years, the rapid evolution of artificial intelligence has opened new frontiers of innovation and opportunity. But alongside these advancements comes a darker, more insidious risk: the use of AI-generated content, particularly deepfakes and other synthetic images, to create child sexual abuse material (CSAM). For youth-serving organizations committed to protecting the children in their care, this represents a new and urgent frontier of abuse prevention.
Through the lens of the Praesidium Safety Equation, the challenge of AI-enhanced exploitation is not just about keeping up with technology; it is about proactively applying prevention strategies across all operations to mitigate risk.
Here’s how organizations can respond through a comprehensive lens:
Expand Your Policies: Define Clear Rules About AI Use and Digital Content
Technology policies can no longer be limited to texting and social media. Youth-serving organizations must articulate clear stances on the use of AI-generated content and the capture, storage, use, and manipulation of images.
- Does your organization prohibit AI-generated exploitative content?
- Are consent and media use policies up to date?
- Is there a photo/video opt-out option for families?
Screening and Selection: Prioritize Digital Awareness in Background Checks
A core component of abuse prevention is understanding and limiting who has access to your consumers, which may include access to their images or electronic communication with them. AI-generated CSAM can be created and distributed by adults or by peers. Offenders may never physically abuse a child but may instead request, share, or exploit images online, or use AI tools to victimize youth through manipulated images.
- Are behaviorally based questions asked about digital interactions with youth?
- Do you conduct social media screening when appropriate?
Monitoring and Supervision: Extend Oversight into Digital Spaces
While physical supervision remains critical, virtual environments now require equal attention. Left unmonitored, these spaces can become invisible venues for abuse.
- Are devices and platforms monitored for inappropriate content?
- Do IT protocols cover shared technology oversight?
Consumer Participation: Equip & Empower Youth to Recognize and Resist Digital Exploitation
One of the most alarming trends in this space is the use of AI tools by youth themselves to harass, embarrass, or exploit peers. A 2023 study by the Crimes Against Children Research Center reported that among respondents who had experienced online child sexual abuse, 88% of the abusive sexual imagery was produced by other youth. Prevention must involve educating youth directly so they can understand, resist, and report digital risks.
- Are youth taught about digital boundaries and image misuse?
- Do you address online and digital abuse within your peer-to-peer prevention curriculum?
Reporting and Responding: Be Ready to Act Quickly and Legally
AI-generated CSAM is illegal, even if no real child was directly harmed in its creation. Organizations must be prepared to report swiftly and respond appropriately. It is also important to understand that the obligation to act applies even when incidents originate outside the organization’s physical premises.
- Do staff know how and when to report suspected AI-related abuse?
- Is your response plan updated for off-site and digital incidents?
A New Era of Risk Demands a New Level of Readiness
The misuse of AI to exploit or harm children is not theoretical; it is already happening. Among its key findings, the 2024 AI CSAM Report Update from the Internet Watch Foundation documented an increase in the incidence of AI-generated CSAM, noted that the images are becoming “more severe,” and warned that the technology can now generate not just images but CSAM videos as well.
Just as the tools used to harm evolve, so too must our tools for prevention. By applying the Praesidium Safety Equation to this emerging risk, youth-serving organizations can continue to protect children across both physical and digital frontiers.
Because safety is not just about what happens in your buildings; it is about the systems you create to safeguard every space youth in your care may enter, online or off.
Are you a current Praesidium Client? Get exclusive access to our AI & Digital Exploitation Risk Prevention Checklist by contacting us at: info@praesidiuminc.com