It’s no secret that algorithms are incredibly problematic, leading to everything from racist policing to sexist hiring. But even for adults who are extremely online, it can be hard to understand what exactly goes into an algorithm and how to make it fairer. For pre-teens, it can be even trickier to grasp the concept and its real-world implications.
That was the idea behind The Most Likely Machine, a project from design studio Artefact that aimed to help students better understand just how algorithmic bias works. Its clever concept and streamlined interface helped make it the winner of the learning category in this year’s Innovation by Design Awards.
Even before COVID-19 hit, forcing many students into remote learning, the Artefact team had been thinking about how to create a tool to help kids with digital literacy, particularly around algorithms and artificial intelligence. The abrupt shift to all-online classes last March only crystallized the need for this kind of resource, one that could provide more transparency into the technology that students were constantly using.
The Seattle-based studio focuses on education, healthcare, and social impact, and while much of its work is done for clients, it leaves room for “passion projects,” according to Matthew Jordan, a partner at the firm. “They’re a way for us to inspire the industry to think about preferable futures like the one The Most Likely Machine aims to create,” he says. “Where students have access to tools to help them become informed digital citizens and understand the biases in the technology around them.”
In this browser-based program, students learn about algorithmic bias through the lens of a fictitious middle school that’s giving out awards: Most Likely to Go to a Top University, Most Likely to Go Viral, and Biggest Troublemaker. “[We wanted to] create an environment that they could relate to and imagine themselves in,” says Andrea Kang, the lead design strategist on the team. “They understood what was trying to be accomplished, and that created a level of engagement.” The site design itself underscores these goals: The pages are crisp and streamlined, with bright colors and simple icons; the instructions are clear; and the process is entirely user-controlled. This too is noteworthy, as you feel a real sense of autonomy moving through each step.
Participants start by assigning awards to famous historical figures like Albert Einstein, Marilyn Monroe, and Rosa Parks. Students also assign a series of attributes to each award (adaptable, adventurous, aggressive, etc.) and rank those attributes in order of importance. Then the machine goes to work, spitting out its own choice for each award.
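The process described above amounts to a simple weighted-attribute ranking: the student's ordering of attributes becomes a set of weights, and the machine picks whichever figure scores highest under those weights. The sketch below is purely illustrative — the attribute scores and the scoring rule are hypothetical assumptions for demonstration, not The Most Likely Machine's actual internals — but it shows the project's core lesson: the "objective" output depends entirely on the scores and weights a human fed in, which is exactly where bias enters.

```python
# Hypothetical sketch of a weighted-attribute award picker.
# The per-figure attribute scores (0-10) below are invented for
# illustration; they are NOT data from The Most Likely Machine.
profiles = {
    "Albert Einstein": {"adaptable": 6, "adventurous": 5, "aggressive": 2},
    "Marilyn Monroe":  {"adaptable": 7, "adventurous": 8, "aggressive": 3},
    "Rosa Parks":      {"adaptable": 8, "adventurous": 7, "aggressive": 5},
}

# A student's ranking of attributes for one award, most important first.
ranking = ["adventurous", "adaptable", "aggressive"]

def pick_winner(profiles, ranking):
    """Turn the ranking into weights (first attribute weighs most),
    then return the figure with the highest total weighted score."""
    weights = {attr: len(ranking) - i for i, attr in enumerate(ranking)}
    def score(attrs):
        return sum(w * attrs.get(a, 0) for a, w in weights.items())
    return max(profiles, key=lambda name: score(profiles[name]))

print(pick_winner(profiles, ranking))  # winner depends on the scores chosen
```

Change a single invented score — say, who gets marked "aggressive" — and a different figure wins, which is the kind of built-in judgment the tool asks students to notice.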
The project uses the differences between the students’ choices and the algorithm’s to explain how bias is built into this seemingly silly process of choosing middle school awards. It then takes it a step further, showing how algorithms can have serious, real-world consequences. For instance, in the UK, a nationwide test was cancelled because of COVID-19. Initially, the plan was for teachers to decide each student’s test scores, but the government decided an algorithm would be better. It ended up downgrading 40% of the test scores. “What if an algorithm predicted you would fail a test before you even took it?” one of the follow-up prompts asks, homing in on an issue all too real for students and showing how algorithmic bias can hit close to home.
To develop the project, Artefact did two rounds of testing with nine students ages 10 to 14. (Jordan says they recruited the students through their own internal networks.) In the first round, “We had students suggesting, ‘Oh algorithms are so awesome! We should use them for our national electoral system,'” Kang says. “We were like, ‘Hold. Pause. We’re not trying to teach you to fully trust algorithms. We want you to think more critically about what’s put into the algorithm.'” They reconfigured the process, adding some more steps to help students better understand how bias gets built in.
The goal of the project is that students can use it autonomously, exploring different outcomes and moving at their own pace. In that sense, COVID-19 helped them fully actualize the end result. “We were looking into educational tools and thinking about digital literacy and algorithmic literacy and artificial intelligence,” Kang says. “A lot of that material is done in a classroom or with an educator or caregiver present. In the era of distance learning, we realized there was a gap. A lot of the inspiration came from wanting to fill that gap and create an experience that gave full autonomy to students.”
Still, the tool readily invites discussion, and Kang says that a pilot program at an in-person elementary school in Tacoma, Washington, showed just how rich those conversations could be. To that end, Artefact is in touch with several nonprofits about using the tool, including a Canadian organization that wants to tailor it with Canadian historical figures. Because The Most Likely Machine is open-source and adaptable, Artefact encourages this.
For Artefact’s designers, this project gave them an opportunity to showcase a mission-driven tool, which isn’t always the case with client work. They thought a lot about the particular demographic they were targeting—preteens—and what would resonate, not just on a visual level but in terms of their life experiences. “This is when preteens are developing their own identities online and getting their own devices,” says Jordan. “Helping them be more understanding and responsible citizens in the digital world is definitely something that we’re excited to apply to the other work we do.”
See more from Fast Company’s 2021 Innovation by Design Awards. Our new book, Fast Company Innovation by Design: Creative Ideas That Transform the Way We Live and Work (Abrams, 2021), is on sale now.