Machine learning is going to radically change product design. But what is the future of machine learning? Is it the singularity, flying cars, voiceless commands, or an Alexa that can actually understand you? Before we can even get to that part, the grand futurism part, I want to offer a provocation: machine learning won’t reach its potential, and may actually cause harm, if it doesn’t develop in tandem with user experience design.
Machine learning refers to different kinds of algorithms that learn from inputs, like human interaction or data, and produce evolving feedback over time from that input. It can use preexisting data to make predictions, or to surface new kinds of connections or patterns within data sets. If this sounds complicated, well . . . it is! Machine learning creates opaque, hard-to-understand systems out of data and technology. Its results can be hard to predict, especially if little is known about the data set or the algorithm being used.
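To make that concrete, here is a deliberately tiny sketch of a “model” that learns only by memorizing data. Everything in it is hypothetical and simplified, but it shows the core point: the prediction comes from the data, not from a rule anyone wrote down, so a skewed data set quietly produces skewed answers.

```python
# Toy sketch: a nearest-neighbor "model" that learns from labeled examples.
# The data and labels here are invented for illustration; real systems use
# far larger, messier data sets, which is what makes them hard to audit.

def train_nearest_neighbor(examples):
    """'Learn' by memorizing (feature, label) pairs from the training data."""
    def predict(x):
        # The prediction is whatever label the closest training point carries.
        closest = min(examples, key=lambda pair: abs(pair[0] - x))
        return closest[1]
    return predict

# If the training data is skewed toward one outcome, so are the predictions.
biased_data = [(1.0, "approve"), (2.0, "approve"), (9.0, "deny")]
model = train_nearest_neighbor(biased_data)
print(model(5.0))  # "approve" -- driven entirely by the data, not an explicit rule
```

No single line of this code encodes a bias, yet the system as a whole does, which is exactly why the data set has to be part of the design conversation.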
This is where design is key. UX and product design take the capabilities, ideas, and policies behind a solution and turn them into a usable experience, one that lets consumers understand what a product is doing and how it does it.
The ethical and practical considerations of machine learning have to be shaped by how products using machine learning affect users, and by how users can understand and see those effects. Illustrating all the different components is key: what kinds of questions are being asked, what the intended solution is, what the specific algorithm was designed or intended to do, what the data set in question is, and what’s in that data set. There are a lot of moving parts. If the conversation about how algorithms are made and what they do doesn’t include designers, we will keep building systems that are hard to understand, obscure their intent, and ultimately cause harm.
Google “professional hair” and look at the results: mostly typical white hairstyles. Google “unprofessional hair” and you’ll see mainly black hairstyles. Professor Latanya Sweeney of Harvard’s Data Privacy Lab did a study showing that if you search black-sounding names, Google pulls up sponsored ads related to arrest in the search results. When Sweeney Googled her own name, it brought up ads asking “Latanya Sweeney arrested?” Professor Sweeney has no criminal record. ProPublica has reported on predictive justice and policing being integrated into courtrooms. Algorithms, and machine learning, can produce erroneous and incredibly biased results that hurt people.
But there is hope. In March, the AAAI conference, an academic conference on artificial intelligence hosted at Stanford, had a track dedicated to machine learning and UX. The track was led by Mike Kuniavsky of PARC; Elizabeth Churchill, director of UX at Google; and Molly Wright Steenson, professor in the School of Design at Carnegie Mellon University. It was one of the first conferences I had attended that really blended academic and industry concerns, engineering, and user experience design. “You need a general literacy, and a design framing or orientation that becomes part of the way someone breathes. So in that moment under pressure, when they are making a design decision, whether it’s an interaction decision or engineering decision, it’s as natural as breathing for them to think about the consequences of human interaction or the ethical interaction,” Churchill said. Any good designer will tell you why design thinking is needed when creating a product, an app, or an idea for human consumption. Usability is key, but “usability” has to mean guarding against erroneous results that can dramatically affect the intended use of your project.
The future of machine learning lies in a hybrid language that bridges design and engineering, with a focus on the ethical and causal effects for consumers. Johanne Christensen, a PhD candidate in computer science at NC State University focusing on UX and machine learning, puts the problem algorithms pose for users this way: “When users don’t understand how an algorithm gets its results, it can be difficult to trust the system, particularly if the consequences of incorrect results are detrimental to the person using it. Transparency communicates trust.”
But how that transparency is articulated to users is a design challenge, and it requires designers to understand data. Here’s a purely illustrative example: if you are creating an app that recommends fitness suggestions based on health data, you need to know what kind of health data you are using. You want to know where the data comes from, how old it is, and how many different body types, ages, people, and locations are in your data set.
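Those questions can be asked of a data set directly. The sketch below is hypothetical, with invented records and field names, but it shows the kind of simple coverage audit a designer could request before trusting a health data set:

```python
# Hypothetical audit of a fitness-recommendation data set: tally how well
# each demographic field is covered. Records and field names are invented
# for illustration only.
from collections import Counter

records = [
    {"age": 24, "region": "US", "body_type": "athletic"},
    {"age": 27, "region": "US", "body_type": "athletic"},
    {"age": 31, "region": "US", "body_type": "average"},
    {"age": 62, "region": "EU", "body_type": "average"},
]

def coverage(records, field):
    """Count how often each value of `field` appears in the data set."""
    return Counter(r[field] for r in records)

print(coverage(records, "region"))     # heavily skewed toward "US"
print(coverage(records, "body_type"))  # two types dominate; many are absent
```

A report like this doesn’t fix a skewed data set, but it makes the skew visible, and visibility is what lets a designer decide whether the product should ship, warn, or collect more data first.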
These aren’t details designers typically concern themselves with, but they need to be. The product you are building uses a specific kind of algorithm, and how that algorithm responds to a specific data set is a design effect, whether or not you intend it, and whether or not you know what the outcome will be.
Caroline Sinders is a machine learning designer, user researcher, artist, and digital anthropologist.