Machine learning is going to radically change product design. But what is the future of machine learning? Is it the singularity, flying cars, voiceless commands, or an Alexa that can actually understand you? Before we can even get to that part–the grand futurism part–I want to offer a provocation: Machine learning won’t reach its potential–and may actually cause harm–if it doesn’t develop in tandem with user experience design.
Machine learning refers to different kinds of algorithms that learn from inputs like human interaction or data and create evolving feedback over time from that input. It can use preexisting data to make predictions, or surface new kinds of connections or patterns within data sets. If this sounds complicated, well . . . it is! Machine learning creates opaque, hard-to-understand systems out of data and technology. It can be hard to predict the results of machine learning, especially if little is known about the data set or the algorithm being used.
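To make that a little more concrete, here is a minimal, purely illustrative sketch of the basic pattern: an algorithm is fed preexisting data and, from it, produces predictions for inputs it has never seen. The language (Python), the library (scikit-learn), and the made-up numbers are all assumptions for illustration; nothing in this piece depends on them.

```python
# A deliberately tiny example: the model "learns" from preexisting data
# (hours of weekly exercise, invented here) to predict a made-up label
# (whether someone meets a hypothetical fitness guideline).
from sklearn.linear_model import LogisticRegression

hours_exercised = [[0], [1], [2], [4], [6], [8]]  # the data it learns from
meets_guideline = [0, 0, 0, 1, 1, 1]              # the outcomes it learns from

model = LogisticRegression()
model.fit(hours_exercised, meets_guideline)

# The model now makes predictions for new inputs, based entirely
# on the patterns in the data above.
print(model.predict([[3], [7]]))        # predicted labels
print(model.predict_proba([[3], [7]]))  # how confident the model is
```

Even in a toy like this, the opacity problem shows up: nothing in the output tells a nonexpert why three hours lands on one side of the line and seven on the other.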
This is where design is key. UX and product design take the capabilities, ideas, and policies behind a solution and turn them into a usable experience, one that lets consumers understand what a product is doing and how it does it.
Google “professional hair” and see the results: a lot of very typical white hairstyles. Google “unprofessional hair” and you’ll see mainly black hairstyles. Professor LaTanya Sweeney of Harvard’s Data Privacy Lab did a study showing that if you search black-sounding names, Google pulls up sponsored ads related to arrest records alongside the search results. Sweeney Googled her own name, and it brought up the question “LaTanya Sweeney arrested?” Professor Sweeney has no criminal record. ProPublica has reported on predictive justice and policing being integrated into courtrooms. Algorithms, and machine learning, can produce erroneous and incredibly biased results that hurt people.
The future of machine learning lies in developing a hybrid language that bridges design and engineering, with a focus on the ethical and causal effects for consumers. Johanne Christensen, a PhD candidate in computer science at NC State University who focuses on UX and machine learning, articulates the problem algorithms pose for users this way: “When users don’t understand how an algorithm gets its results, it can be difficult to trust the system, particularly if the consequences of incorrect results are detrimental to the person using it. Transparency communicates trust.”
But how that transparency is articulated to users is a design challenge, and it requires designers to understand data. Here’s a purely illustrative example: if you are creating an app that makes fitness suggestions based on health data, you need to know what kind of health data you are using. You want to know where the data comes from, how old it is, and how many different kinds of body types, ages, people, and locations are represented in your data set.
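As a sketch of what that kind of data audit can look like in practice, the snippet below checks a data set’s age and how well different groups are represented before any model is trained. The file name, the column names, and the five percent threshold are all hypothetical, chosen only to illustrate the questions a designer should be asking.

```python
import pandas as pd

# Hypothetical health data set; the file and column names are invented.
df = pd.read_csv("health_data.csv")

# How old is the data?
df["recorded_at"] = pd.to_datetime(df["recorded_at"])
print("Oldest record:", df["recorded_at"].min())
print("Newest record:", df["recorded_at"].max())

# Who is actually represented in it?
for column in ["age_group", "body_type", "location"]:
    counts = df[column].value_counts(normalize=True)
    print(f"\nShare of records by {column}:")
    print(counts)

    # Flag groups that make up less than 5% of the data; a fitness
    # recommendation trained on this set will likely serve them poorly.
    underrepresented = counts[counts < 0.05]
    if not underrepresented.empty:
        print("Underrepresented:", list(underrepresented.index))
```

A summary like this is exactly the kind of artifact a designer and an engineer can read together before deciding what the product should promise its users.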
These aren’t details designers typically concern themselves with, but they need to. The product you are building uses a specific kind of algorithm, and how that algorithm responds to a specific data set is a design effect, whether or not you intend it and whether or not you know what the outcome will be.
Caroline Sinders is a machine learning designer, user researcher, artist, and digital anthropologist.