Jennifer LaGarde and Darren Hudgins

When it comes to media literacy and our collective inability to tell fact from fiction online, there’s a lot of finger-pointing going on.

Take, for example, the widely shared 2016 study from the Stanford History Education Group (SHEG), which revealed (among other things) that nearly 82% of middle school students surveyed couldn’t distinguish sponsored content from editorial content. Not only did the headlines around this study rightfully include words like “shocking” and “bleak,” but they also left adults, and educators in particular, sighing and shaking their heads. I mean, these kids today, they can post a selfie to Snapchat while simultaneously ordering a smoothie from Uber Eats and sharing a 15-second video of their cat on TikTok, but they can’t tell the difference between a groundbreaking video of a giant squid and a hoax website devoted to the Pacific Northwest Tree Octopus. #amirite?

But then, in early 2019 another study reported that people over 65 shared nearly seven times as many articles from fake news domains on Facebook as those in the youngest group surveyed, and suddenly, the tables were turned. The fact that Facebook remains one of the least popular social networks among young people didn’t stop Generation Z from looking up, ever so briefly, from their phones, to collectively roll their eyes at the 65+ crowd.

Although neither of us is quite old enough to fit into the demographic of Facebook’s most notorious oversharers, we both DO remember a time when teaching media literacy seemed far less complicated. Back in the day, when Jennifer completed National Board Certification, one of her entries featured a lesson on website evaluation that spent a great deal of time on Boolean search techniques and the merits of analyzing website domains. (Hint: .gov = good and .com = bad). If, indeed, these were ever effective strategies for evaluating online information, those days are long gone. Not only has the amount of information we consume on a daily basis increased exponentially, but the tools, their operating systems and the algorithms that control both have also undergone radical transformations.

According to the Pew Research Center, roughly two-thirds of Americans report getting their news from social media. This alone should be a significant finding for those of us who are in the business of helping young people navigate news and information. But when we dig a little deeper, the implications grow: Pew also found that in addition to being the second-most-popular search engine in the world, YouTube is the second-most-popular social media source for news, behind only Facebook. (This is really important, especially when we consider how many students today aspire to be YouTubers.) And while Facebook continues to appear at the top of that list, it’s also true that over 100 million hours of video are watched on Facebook every day. And since much of that is cross-posted from YouTube, it’s clear that Americans of all ages are increasingly turning to video as one of their most trusted forms of information — including news. And that’s cool, because seeing is believing, right?

Well … no. Not exactly.

By now, most of us are aware of the existence of “deepfake” technology, the software that creates realistic videos depicting people saying and doing things they never actually said or did. What’s more, as deepfake technology becomes more advanced, the impostor videos are becoming nearly impossible to detect. Of course, people have been able to manipulate video for a long time, but in the past this required a unique skill set and very expensive tools. And while deepfake technology isn’t yet as prevalent or accessible as the photo editing apps we all use to remove a few wrinkles or add a set of bunny ears to our selfies, it wasn’t that long ago that those tricks were only possible for people who both understood and could afford what was then considered highly specialized software, like Photoshop. Indeed, in the short time since we published Fact VS Fiction, access to deepfake technology has grown considerably. And with the ever-increasing number of YouTube channels devoted entirely to teaching folks how to master the art of the face swap, we have to wonder how long it will be before we’re all taking our JibJab game to the next level.


But let’s be honest, it doesn’t take masterfully edited video content to fool us. In fact, some of the most recent viral videos to spark fear and outrage (before eventually being debunked) didn’t employ slick editing software at all. Rather, these videos played on our biases and anxieties to garner swift, even violent, reactions. By the time additional information revealing a more nuanced accounting of each story became available, positions had been taken, beliefs entrenched. And given that fabricated information posted online travels six times more quickly than its factual counterparts, according to a 2018 MIT study, it’s very likely that many people never even encountered later, more well-rounded versions of the original stories.

So where does that leave us? How do we teach our kids to tell fact from fiction when seeing is no longer believing? As with most things related to social media, the internet and media literacy, it’s complicated, but not impossible. The key is not to look for easy, foolproof solutions. This problem is big and complex; it’s going to take more than a mnemonic device to tackle it. And while we like some of the tips posted here for spotting a deepfake video, we also recognize that as the technology improves, these technical tells will be more difficult to identify. In fact, the more we dig into the role video plays in the spread of false information, the more convinced we are that arming students with tools to keep them from being swept up in content that turns out to be false should focus less on the videos themselves and more on recognizing and managing our own emotions and behavior. Here’s what we mean, broken down into six R’s:

  1. Recognize triggers: How we feel about the content we consume can tell us a lot about whether or not it’s been designed to manipulate us. We should all be suspicious of videos that trigger swift, passionate and often negative responses, such as fear or outrage. Those raw and strong emotions should be a red flag. And when those warning signs appear, the next steps should be to press pause, back away from the keyboard and then come back to it once we’ve considered some additional questions.
     
  2. Retrace the outrage: If a video sparks anger toward a specific person or group, it’s helpful to consider who benefits from that outrage. This can help us trace a video to its original source and spot a (not so) hidden agenda of manipulation.
     
  3. Reflect on your own biases: We all have opinions and biases related to what’s happening in the world around us. Just because something confirms what we believe to be true doesn’t mean it shouldn’t be verified, especially if there are other warning signs that should make us suspicious of the content.
     
  4. Rethink what “going viral” means: Although it’s come to mean something very different in recent years, the word viral originally described how a virus or infection spreads — both of which we’d all like to avoid. Viral content online should give us similar pause. If a video is suddenly everywhere — going from zero to millions of shares in a matter of minutes — it’s critical that we teach our kids to ask themselves why, and then to determine whether the content is actually newsworthy or if emotion is driving the speed at which it’s being shared.
     
  5. Resist the urge to be first: When we see something incredible online, it’s tempting to share it right away — especially when it appears that we’re the first person in our networks to discover the content. But being first to share what ultimately turns out to be false content is nothing to be proud of. While the terms of service for many social media platforms prohibit our students from joining until they reach a specific age, we all know that a lot of our kiddos haven’t read (or choose to ignore) the fine print. Teaching them to take the time to understand and evaluate the information they endorse (with their likes, shares and reposts) will make them more credible members of the communities they join. What’s more, as educators, we should be modeling this practice in our own networks.
     
  6. Revisit common sense: As the saying goes, if it’s too good (incredible, outrageous, etc.) to be true, it probably is. In the words of Hany Farid, a digital forensics and image analysis expert at Dartmouth College:

“Before you hit the like, share, retweet, etc., just think for a minute. Where did this come from? Do I know what the source is? Does this actually make sense? Forget about the forensics, just think.”

Learning to recognize how our emotions can be used to manipulate our behavior is a skill set that will continue to serve our students as information consumers no matter how advanced the technology used to fool us becomes. That’s not to say, of course, that there aren’t some practical tips that can also aid our kids in identifying false or hoax video content. We think some important skills in this category include:

  1. The ability to distinguish a primary source from a secondary source in a social media context. The person posting the video is often not the person who created it. We have to help our learners develop the skills for following a video’s trail back to its original source, keeping in mind that content that cannot be traced to a primary source also cannot be trusted.
     
  2. Determining the publication date vs. the posting date. These are two different things, but often the latter is mistaken for the former. Resurrecting old stories to spark new outrage is a common tactic used by those who are keen to mislead the public. Identifying when a video was first published is an important step in determining credibility.
     
  3. Understanding perspective. All videos reflect the (physical or philosophical) point of view of the person holding the camera. Showing kids how video perspective and camera angles can be used to manipulate information is great! But having them create their own videos (using handheld devices or even drones) in which they change perspective to control what the viewer sees is even better!
     
  4. Triangulate. Triangulate. Triangulate. When it comes to evaluating online content of any type, including video, this is still one of the most powerful tools in our arsenal. Helping kids develop the habit of seeking out other known and credible sources for the same information is a critical step in cultivating their roles as digital detectives. In order for this to become a true habit, however, and not just something compliant students do for specific assignments at school, we must model it in our own practice: Every time we pull up a video to show in class, we should be modeling the vetting process.  

In a world that seems more divided than ever and in which the forces that benefit from that division are only becoming more and more skilled at fooling us, one thing that most of us agree on is that false content online is a major and destructive problem. And since so much of that content is now being shared as video, our existing media literacy efforts must evolve to include this content.

This is a big job, but the good news is you don’t have to do it alone. Before you set out on this hero’s journey all by yourself, we recommend you recruit some help. An optional workshop to help teachers become better digital detectives themselves is a good place to start. Follow that with an invitation to parents to join you in an after-school event that teaches them how to model the practice of evaluating information for credibility at home. The more adults our young people see modeling both a disposition of skepticism and the steps required to vet information before sharing it, the better. After all, while the adage about it taking a village to raise a child may have become a bit of a cliché, in our digital world it’s never been more true. In the end, if we want our learners to develop the skills to slay misinformation in all its forms, we need to gather our village and get to work. Good luck! And … let us know if we can help!

Jennifer LaGarde is a lifelong teacher and learner with over 20 years in public education and co-author of Fact VS Fiction: Teaching Critical Thinking In The Age of Fake News. Her educational passions include leveraging technology to help students develop authentic reading lives, meeting the unique needs of students living in poverty and helping learners (of all ages) discern fact from fiction in the information they consume.

Darren Hudgins is the CEO of Think | Do | Thrive and co-author of Fact VS Fiction: Teaching Critical Thinking In The Age of Fake News. In both capacities, he works with educators, school leaders, districts and school organizations to build experiences that promote thought, play and innovation — strengthening human capacity, driving action and inspiring the souls of social servants so students thrive. Learn more about his 20-plus years in education on his website.
