AI deepfakes – how can we protect our children?


Brad Bartlett


In a world where technology is rapidly advancing, parents face new challenges in protecting their children from emerging threats. One such concern is the rise of AI-generated deepfakes – synthetic media that can mimic real people’s voices, images, and videos with alarming accuracy.

Imagine receiving a distressing phone call from your daughter, pleading for help because dangerous criminals have kidnapped her. That’s exactly what happened to Jennifer DeStefano from Arizona in January 2023. Terrified and desperate to save her child, Jennifer listened in horror as a man threatened to harm her daughter if she didn’t send money immediately.

But here’s the shocking twist: Jennifer’s daughter was never in danger. She was safely enjoying a skiing trip, completely unaware of the traumatic call. The voice Jennifer heard wasn’t her daughter’s at all – it was a stunningly realistic AI-generated deepfake, crafted by scammers to extort money from unsuspecting parents.

Unfortunately, Jennifer’s story is not an isolated incident. As AI technology becomes more sophisticated, criminals are finding new ways to exploit it for financial gain, often targeting vulnerable families. Even members of Congress are taking notice, exploring how deepfakes can infringe on individual rights and cause serious harm.

As parents, it’s crucial that we stay informed about these emerging threats to protect our children and ourselves. Let’s take a look at the world of AI deepfakes—what they are, how they work, and, most importantly, what steps we can take to safeguard our families against their malicious use.

While the risks are real and concerning, our goal is to empower you with knowledge and practical strategies to navigate this new landscape with confidence.

What are AI Deepfakes?

At the heart of the deepfake phenomenon lies a powerful technology called generative AI. In simple terms, generative AI refers to artificial intelligence algorithms that can create new content – such as images, videos, audio, or text – based on patterns learned from existing data.

To understand how generative AI works, imagine a child learning to draw. As the child studies various examples of art, they begin to recognize common patterns, like how eyes, noses, and mouths are typically arranged on a face. With practice, the child learns to combine these patterns in new ways to create their own unique drawings.

Generative AI operates on a similar principle, but on a much larger scale. By analyzing vast amounts of data, these algorithms learn to identify and replicate patterns, allowing them to generate new content that mimics the original data with striking realism.

How Do These AI Models Mimic Real People?

One of the most popular types of generative AI is called a Generative Adversarial Network (GAN). In a GAN, two AI models work together in a sort of competition.

One model, called the generator, creates new content, while the other model, known as the discriminator, tries to distinguish between the generated content and real examples. As the models iterate, the generator learns to create increasingly convincing content that can fool the discriminator.
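For readers curious about what this “competition” looks like in practice, here’s a deliberately simplified sketch in Python. It is not a real GAN – no neural networks are involved, and all the names and numbers are invented for illustration. It only captures the adversarial idea: one side learns what real data looks like, while the other keeps adjusting its output until the two are hard to tell apart.

```python
import random

random.seed(0)

REAL_MEAN = 5.0  # the "real data": numbers scattered around 5

def real_sample():
    """Draw one sample from the real data distribution."""
    return REAL_MEAN + random.uniform(-0.5, 0.5)

generator_guess = 0.0         # the generator starts nowhere near the real data
discriminator_estimate = 0.0  # the discriminator's current idea of "real"

for step in range(200):
    real = real_sample()
    fake = generator_guess + random.uniform(-0.5, 0.5)  # the generator's attempt

    # The discriminator studies real examples and refines its notion of "real".
    discriminator_estimate += 0.1 * (real - discriminator_estimate)

    # The generator nudges its output toward whatever the discriminator
    # currently accepts as real – in other words, it learns to fool it.
    generator_guess += 0.1 * (discriminator_estimate - generator_guess)

# By the end, the generator's fakes are statistically hard to tell
# apart from real samples.
print(round(generator_guess, 1))
```

In a real GAN both players are neural networks learning from millions of images or audio clips rather than single numbers, but the back-and-forth dynamic is the same: each round of the competition makes the fakes a little harder to spot.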

You can imagine how powerful this feedback loop is for advancing the technology. When applied to deepfakes, generative AI algorithms can quickly analyze real images, videos, and audio recordings of a person to learn the unique patterns of their facial features, expressions, movements, and voice. The AI then uses this learned information (like the child learning to copy and draw pictures) to synthesize new content that closely mimics the original person.

For example, to create a deepfake video, the AI algorithm might study footage of a celebrity’s face from multiple angles and in various lighting conditions. By learning the intricacies of the celebrity’s facial features and expressions, the AI can then generate new video frames that seamlessly replace the original person’s face with the celebrity’s likeness.

AI algorithms can analyze recordings of a person’s voice to learn the distinct patterns of their speech, including their tone, pitch, accent, and mannerisms. With enough training data, the AI can then synthesize new audio content that sounds uncannily like the original person speaking.

What Are The Potential Risks of Deepfakes for Kids and Teens?

As parents, our instinct is to protect our children from harm, both in the physical world and the digital one. However, with the rise of AI-generated deepfakes, we face a new set of challenges that can have serious consequences for our kids’ well-being, development, and even safety.

Exposure to Inappropriate, Misleading or Disturbing Content

As deepfake technology becomes more accessible, it’s easier than ever for bad actors to create fake videos, images, or audio that appear convincingly real.

Imagine, for instance, a child stumbling upon a deepfake video that appears to show their favorite celebrity engaging in explicit or violent behavior. Or picture a deepfake image that portrays a trusted public figure making false or inflammatory statements.

While adults may be able to see the content for what it is, children may become confused or upset when trusted figures appear to behave in unexpected ways.

Difficulty Distinguishing Real from Fake, Leading to Confusion and False Beliefs

As adults, we also struggle to discern genuine content from AI-generated fakes. One recent study found that adults could identify AI-written text only 57% of the time – and could distinguish AI-generated images from real ones just 53% of the time, barely better than a coin flip.

For children who are still developing their understanding of the world and learning to navigate complex information landscapes, this challenge is even greater.

When exposed to deepfakes, children may have difficulty distinguishing between what’s real and what’s fabricated. Over time, repeated exposure to deepfakes may cause children to internalize false information or develop skewed perceptions of reality.

Potential for Cyberbullying, Scams, and Reputational Harm Using Deepfake Tech

Perhaps most alarmingly, deepfakes can be weaponized to target children directly. Bullies or abusers may use the technology to create humiliating or compromising fake content featuring their victims, such as placing a child’s face onto an adult’s body in an explicit image. In real cases, students have used AI to create fake images of classmates, damaging reputations and causing serious harm.

Erosion of Trust in Media and Information Sources

Our kids already live in a world where seeing is no longer believing – and as AI deepfakes become more widespread, children may develop a deep distrust of media and information sources.

As they struggle to distinguish real from fake, they may question the credibility of all content they encounter, even from reputable sources such as parents, teachers, and authority figures.

This erosion of trust can have far-reaching implications for children’s intellectual and social development. It may lead to a cynical worldview, where facts are constantly in doubt and opinions are shaped more by fear and suspicion than by evidence and reason.

Tips for Helping Kids Identify Deepfakes and AI-Generated Content

Sure, not all AI is bad! AI has made creating educational content on a wide range of topics easier than ever and has opened up new opportunities for creativity and expression. However, the rise of deepfakes and other AI-generated content means that parents and educators must be more vigilant than ever in helping kids navigate this complex media landscape.

Here are some tips to help kids identify and understand deepfakes and other AI-generated content:

Look for visual glitches and inconsistencies

Explain to kids that while deepfakes can be very convincing, they often contain subtle visual glitches or inconsistencies. Although AI is getting better at hiding these flaws, they’re still fairly obvious in many high-profile scams and deepfakes.

Encourage them to look closely at videos or images for signs of unnatural movements, mismatched lighting, or blurring around the edges of faces. Point out examples of visual artifacts, like flickering or distortion, that can indicate a deepfake.

Pay attention to audio cues

Help kids understand that deepfake audio may sound slightly robotic, distorted, or unnatural. Encourage them to listen carefully for odd pronunciations, inconsistent voice pitch, or background noise that doesn’t match the visuals.

Emphasize the importance of considering the audio quality alongside the visual elements when assessing the authenticity of a video or audio clip. This can be harder for younger children, but older kids can often tell when something feels “off” about a piece of content or media.

Check the source and cross-reference information

Teach kids to be cautious of content from unknown or unreliable sources. Encourage them to check if the content appears on reputable news sites or official channels – and if it’s too good to be true, it probably is!

Show them how to cross-reference information by searching for the same story or claim across multiple trusted sources. If a story or video can’t be verified elsewhere, it may be a deepfake or misinformation.

Foster a curious-yet-critical mindset

The best way to help children understand deepfake technology is to foster a curious mindset. Encourage kids to approach online content with a healthy dose of curious skepticism. Teach them to ask questions like:

  • “Who created this content and why?”
  • “What evidence supports this claim?”
  • “Could this be a deepfake or manipulated content?”


Help them understand that just because something looks or sounds real doesn’t necessarily mean it is. Reinforce the importance of fact-checking and verifying information before accepting it as true.

Stay informed about the latest deepfake technology

As a parent, stay up-to-date on the latest advancements in deepfake technology and the ways it can be misused. Follow reputable tech news sources and online safety organizations to stay informed.

When possible, try to share age-appropriate updates and examples with your kids to help them understand the evolving landscape of AI-generated content and its potential risks. This will help them feel better prepared to identify and handle any questionable content they come across online.

Teach Engagement With Curiosity – Not Fear

AI is here to stay, and it’s safe to say that our kids will probably grow up knowing more about it than we do! But that doesn’t change the risks posed by nefarious actors who want to prey on vulnerable populations with powerful tools.

By teaching kids these practical skills and encouraging a critical mindset, we can help them navigate the world of deepfakes and AI-generated content with confidence.

Remember, the goal isn’t to scare kids or discourage them from engaging with technology but rather to empower them with the tools and knowledge they need to stay safe, informed, and in control of their online experiences. When we can help them develop a curiosity about technology and its capabilities rather than fear, we can set them up for success in the digital world.

Want to learn more about the latest technology and how to keep kids safe online? Check out our other guides and resources on Kidslox – and take the steps to create a safe and thriving online learning environment for your family.