Trauma from AI deepfakes can be particularly harmful
AI deepfakes are different from traditional bullying because instead of a nasty text or rumor, there is a video or image that often goes viral and then continues to resurface, creating a cycle of trauma, Alexander said.
Many victims become depressed and anxious, he said.
“They literally shut down because it makes it feel like, you know, there’s no way they can even prove that this is not real — because it does look 100% real,” he said.
Parents are encouraged to talk to students
Parents can start the conversation by casually asking their kids if they’ve seen any funny fake videos online, Alexander said.
Take a moment to laugh at some of them, like Bigfoot chasing after hikers, he said. From there, parents can ask their kids, “Have you thought about what it would be like if you were in this video, even the funny one?” And then parents can ask if a classmate has made a fake video, even an innocuous one.
“Based on the numbers, I guarantee they’ll say that they know someone,” he said.
If kids encounter things like deepfakes, they need to know they can talk to their parents without getting in trouble, said Laura Tierney, founder and CEO of The Social Institute, which educates people on responsible social media use and has helped schools develop policies. Many kids fear their parents will overreact or take their phones away, she said.
She uses the acronym SHIELD as a road map for how to respond. The “S” stands for “stop” — don’t forward the image. “H” is for “huddle” with a trusted adult. The “I” is for “inform” any social media platforms where the image is posted. “E” is a cue to collect “evidence,” such as who is spreading the image, but not to download anything. The “L” is for “limit” social media access. The “D” is a reminder to “direct” victims to help.
“The fact that that acronym is six steps I think shows that this issue is really complicated,” she said.