The internet can be a disturbing, destructive place, especially for children. In just a few clicks, kids can find explicit images, racist and sexist comments, or outlandish conspiracy theories; they can become the targets of sexual predators pretending to be friends; and people they know can use the internet to bully them. That’s why in 2017, Google launched an internet literacy program called Be Internet Awesome that aims to teach kids ages 7 to 11 how to stay safe and secure online.
Today, Google is launching a new set of lessons for Be Internet Awesome that focuses on media literacy. It includes six lesson plans for educators to use in the classroom, with a variety of activities that teach children how to spot a fake website, understand how images and videos can be edited to leave out important context, and analyze how captions and text can change the meaning of different types of media. Developed by a group of educators in partnership with experts like Anne Collier, the executive director of the Net Safety Collaborative, the new lessons further Be Internet Awesome’s earnest effort to turn children into savvy internet consumers at a time when fake news abounds and bullying happens in places that teachers and parents can’t supervise.
Be Internet Awesome is already taught in all 50 states, and it’s available in nine different languages. In fact, the project’s new media literacy lessons came out of requests from educators, who wanted ways to teach kids about misinformation in particular. Jessica Covarrubias, the program lead for Be Internet Awesome, says the curriculum is designed to help kids navigate a range of situations on the internet, through discussion questions for teachers and hands-on activities that get students thinking about how they would act in real-world situations. Everything is available online via the Be Internet Awesome website, as well as in paper form for classrooms where students don’t have laptops.
One lesson called “Frame It” prompts students to think about the people who post videos, photos, and other visual media online, and how what those people decide to include or exclude can change the meaning of content. The curriculum instructs kids to take an index card and cut a rectangle out of the center to make their own frames. Then the kids can zoom in and out with their frames, seeing how a frame helps a media maker decide what to include and what to leave out. An accompanying handout shows images with two different frames: For instance, one frame shows a pirate ship sailing on a blue ocean, but the image next to it reveals that the blue ocean is actually a child’s bathtub.
Another activity called “Is that really true?” helps students decide whether information is credible based on who’s sharing it with them and the situation in which it was shared. A third lesson asks students to look closely at what’s different between two versions of an image, which opens an avenue to discuss the difference between real and fake news stories. The curriculum also includes a handout with a list of real and fake news website URLs, and students are supposed to discern which is which.
Most critically, Be Internet Awesome acknowledges that kids aren’t always going to know how to handle a situation that makes them uncomfortable. That extends to what kids might see on YouTube. Google owns YouTube Kids, the kids-centric section of YouTube that has come under fire in recent years for showing violence and sexually suggestive situations on autoplay and feeding videos of children to pedophiles. (According to The Washington Post, the FTC is in the final stages of an investigation into how YouTube has violated children’s privacy.)
In this sense, Google, as YouTube’s parent company, has every incentive to put the onus on kids to monitor their own behavior. Covarrubias referenced Be Internet Awesome’s “Brave” pillar, which teaches kids about when they should speak to an adult about content they’re seeing that makes them uncomfortable and shows them how to block or report inappropriate content. While most of the situations the curriculum lays out have to do with bullying, one appears to directly address the YouTube controversies: “You’re watching a cartoon video and all of a sudden there’s some weird content in it that’s definitely not appropriate for kids and makes you feel uncomfortable. Do you report it or not?”
According to Be Internet Awesome, teachers are supposed to encourage students to report content in situations like this. That means the reporting mechanisms have to actually work. Google has to ensure that the inappropriate content will be taken down—something it outsources to a mix of machine learning algorithms and moderators that don’t always get it right. After The New York Times reported that YouTube’s recommendation algorithm was showing children violent videos, Google hired thousands of moderators to better uphold its policies, and the company is considering moving all of its kids’ content to a separate app. Parents can now set controls within YouTube Kids that limit what their children can see to only channels that have been reviewed by human moderators, as well. According to The Wall Street Journal, some YouTube employees don’t think this goes far enough. They think YouTube needs to turn off the autoplay function that has fed kids inappropriate videos in the past.
Because of YouTube’s algorithmic problems, Be Internet Awesome is something of a stopgap measure. If Google can’t guarantee to parents that their children won’t see inappropriate content on YouTube Kids, then at least it can help teach kids what to do when the media they’re seeing does make them feel uncomfortable.
“Honestly, it’s the nature of YouTube, right? It’s user-generated content and it continues to grow,” Covarrubias says. “And so we’re trying to also learn how to grow our systems in place with it.”