Sexualized Deepfakes in UK Schools: Understanding and Preventing AI-Generated Image-Based Sexual Abuse Through Better AI Literacies


Abstract

Responding to the lack of academic research on how young people are affected by deepfake sexual abuse and how schools should address these issues, this paper explores levels of awareness of AI technology and sexualized deepfakes in UK schools and how schools are responding to these newly emergent harms. Drawing on interviews with students and teachers from eight schools across the UK, we found that teachers and students express uncertainty about how AI deepfake technology works. Some teachers underestimated how easy the technology is to use, and they lacked a shared understanding that sexualized deepfakes should be treated the same way as non-consensual nudes, leading to inconsistent school responses. Students similarly lacked basic literacy about AI, equating AI with LLMs like ChatGPT, and even though sexualized deepfakes were occurring in their school contexts, students reported having received no explicit education on the topic. Educators and students connected sexualized deepfakes to a rise in misogyny via social media influencers, and some students and teachers called for education on AI, sexual violence, and consent at earlier ages. We advance the concept of AI-generated image-based sexual abuse, arguing that these harms should be understood as elements of technology-facilitated gender-based violence (TFGBV). We argue this framing is necessary to support systematic understandings of this issue and to develop appropriate school responses. Our discussion offers recommendations for improving AI literacy, including preventative AI education that engages critically with AI harms and supports victims.
