SCHOOL EXPELS GIRL SEXUALLY ABUSED BY AI DEEPFAKES
9 policy fixes schools need to protect our children.
According to recent reports:
AI-generated nude deepfakes have appeared in documented school incidents across multiple districts. A national survey cited by Education Week reports that 1 in 8 young people (ages 13–20) say they know someone targeted by AI-generated deepfake nude imagery, and 1 in 17 say they have personally been targeted.
Roughly 1 in 10 minors say they know peers who have used generative AI tools to create non-consensual nude images of other minors.
There have been multiple incidents in which students used AI tools to fabricate sexually explicit images of classmates and distribute them through social media platforms.
Schools often resort to disciplinary codes that were not designed for this class of harm.
Educators acknowledge gaps in preparedness and policy clarity when responding to synthetic-image harassment.
In one recent incident, despite the young victim’s repeated pleas for help:
School administrators struggled to locate the images and doubted that they existed.
Students continued to spread the AI-generated deepfake nude images of her.
Her school expelled her when she fought back against her abusers.
Paid Subscribers Get:
The complete story of a middle school girl who was expelled for fighting back.
A deep dive into AI’s role in enabling her abuse.
Policy recommendations for schools and AI providers to defend children from AI-generated sexual abuse.