We’re often told that we live in the greatest era of technological progress in history. But this can’t be true if the technologies we depend on are hard to use.
If you’re like most people, you’ve accidentally left the mute button on, or off, in a video call. You probably feel overwhelmed by your email inbox, Slack messages, or phone notifications. It’s likely you create important documents in the cloud but struggle to find them later. We blame ourselves for these problems, even though the companies that sell these products are the ones responsible.
Over the last year we’ve seen many shocking examples of tech failure by design. The rollout of vaccines in America was hamstrung by confusing registration websites. Citibank lost $500 million because an internal tool for managing loans was too confusing for employees to use. And the Robinhood trading app, designed to encourage users to trade often, misled a young man into erroneously believing he was more than $700,000 in debt, and in his despair he took his own life. Making products that are well designed is not nearly as common as it needs to be.
Nearly 80 years ago, we learned the hard way that when people make mistakes, it’s usually the fault of the technology itself, not the people using it. While working to design safer aircraft during WWII, cognitive psychologists Paul Fitts and Richard Jones showed that the design of cockpits—how different buttons and levers were shaped and positioned—explained crashes far better than anything pilots did or did not do. This finding paved the way for entire professions, including usability experts and user experience designers, dedicated to making technology that people can actually use.
Yet decades later, we frequently ignore these experts and their advice. Consider how the $2 trillion F-35 project was recently reported to be not only unsafe to fly but a boondoggle of conflicting priorities and competing goals. Or how the streaming platform Quibi, which raised $1.75 billion, shut down in just six months because it skipped design fundamentals like identifying a clear problem to solve. Notably, both projects were also undermined by bureaucratic and managerial incompetence, which is routinely the enemy of good design.
The basics of good design are often ignored in part because technology and design are taught as separate subjects. I studied computer science at Carnegie Mellon University, one of the best technology schools in the world. Absent from required classes, then or today, are the interdisciplinary insights that are essential to designing good things for people. Instead, students are taught to build things that function technically but not humanely.
This is compounded by the fact that engineers are overwhelmingly young white men, who tend to operate under the assumption that what works for them will work for everyone. Without expert help, we are all prone to forgetting that a design that’s intuitive to us can simultaneously be confusing to others. This natural bias of “what’s good for me is good for all” does damage that reaches deep into society, sometimes with devastating consequences. Consider how women are more likely to be injured in car accidents because crash test dummies are usually modeled on men’s bodies. Or how facial recognition and machine learning systems often favor white people, who are better represented in the data used to train them.
It’s not just built-in biases that undermine good design. Sometimes business motivations work against it too. Design pioneer Victor Papanek described this as the difference between design for sale and design for use. Think about how Ikea furniture is designed to sell: It’s attractive and affordable. Its stores and catalogs are designed to be beautiful and inviting. Yet once you get your new bunk bed home, it’s an exercise in frustration to put together—despite the cute-looking instructions.
It’s only after our purchases that we understand the true design of what we bought, and that the initial user experience was designed primarily for sales. Papanek implored us to see that the products that sell best, or are most popular, can have the worst designs for us or for society at large. We often buy products on impulse, or based on how many features they have, which can bear little relationship to what problems they will truly solve or how easy they are to use. Papanek’s theory explains how things like free social media sites, which are enormously popular and financially successful, can work against the privacy and mental health of their own customers. Businesses can balance design for sale with design for use, but many don’t, either because they don’t know how or because they won’t invest until the marketplace and consumers pressure them to.
We need to shift how we measure progress away from the potential in a technology and toward what people are actually able to achieve with it. It’s time to stop trusting corporations’ promises that the next version will be different. If they’re using the same people and values to make design decisions, why should we expect different results? We must recognize that businesses are themselves designed to put their self-interest first, despite what their advertising and marketing say.
Everyone, from consumers to programmers to business leaders, must become more educated about what good design really means. Consumers don’t necessarily need to become designers themselves, but they should become better judges of the true value of things before they buy them. Technologists and businesspeople need to understand the common traps that lead to bad design and do what they can to avoid them. This is often as simple as valuing design experts enough to listen to them at the start of projects, when the important decisions are made, rather than at the end, when their advice comes far too late.