Tech companies are slowly acknowledging that tech isn’t always a force for good. It can also spread misinformation, entrench bias, and erode standards of respect and privacy. But the question remains: What can technologists and designers actually do to anticipate tech’s potential downsides?
Many have posited that a code of ethics for technology–similar to the moral guidelines that bind doctors and lawyers–can help ensure better decision-making, an idea that companies are slowly adopting. Google used its code of ethics (as well as an employee uprising) as the reason for not renewing a $10 billion defense contract with the Pentagon. Several companies have employees whose job is to help the company navigate the potential minefields that AI could bring to its products. Outside of the commercial space, institutions and academics are trying to build standards for the ethical use of technology–particularly AI–as well. In August, the software engineering professional organization Association for Computing Machinery issued its own “Hippocratic Oath” for developers.
But does any of this work?
A new study by North Carolina State University professor Emerson Murphy-Hill found that the ACM’s newly developed code of ethics had no impact on the decision-making of computer scientists. But something else did: history. The researchers showed the code of ethics to one group and not to another, and then presented all of the participants with ethical situations and asked them to make a decision. A subset of the programmers who participated in the study had heard of one of the historical situations the researchers presented–Volkswagen’s efforts to trick emissions testing, aka Dieselgate. Interestingly, all of these people made decisions suggesting that they wouldn’t build software to evade regulations–a far higher rate than among participants who had no knowledge of the Dieselgate scandal.
“A better way to teach software developers ethics might be to focus on critical incidents in the past, and help developers draw parallels between their work and those incidents,” Murphy-Hill tells Fast Company via email. “In the paper, we did find that awareness of high-profile incidents changed the decisions that participants reported that they would make.”
In the study, Murphy-Hill and grad students Andrew McNamara and Justin Smith tested 168 programmers (both professionals and students) on software-related decisions with ethical implications, drawn from the forums of the programming website Stack Exchange. In a first round of interviews, they asked developers if they recalled any decisions they’d made that might have had ethical implications; none of the participants could recall any. Then, before asking participants to decide how they would act in particular situations, the researchers showed one group the ACM’s code of ethics and showed the other group nothing. After analyzing the differences in answers between the two groups, the researchers found that reading the code of ethics did not have any significant impact on ethical decisions.
“Overall there’s a lack of evidence for what a professional code of ethics can be used for effectively,” Murphy-Hill says. “There are some potential uses–like as a way for ethics professionals within large organizations to structure their thinking about important policy decisions–but it remains unclear whether a professional code of ethics can effectively serve such purposes.”
The study focused on software engineers, but there’s a good chance the results apply to the design profession, too, since designers’ work interfaces directly with users. Dark patterns–UX that deliberately tricks users into doing something they don’t want to do–are an example of designers making decisions unconstrained by any ethical code. There’s a growing movement toward healthy, less addictive UX, where the most important metric isn’t convincing people to spend as much time as possible on your website or app–instead, you design for behaviors that will make people happier in their lives and more effective in their jobs. Codifying ethical standards might not represent the best path toward healthy UX, but perhaps studying past dark patterns and user abuse could help.
Murphy-Hill plans to continue his research into ethics at Google, where he starts as a research scientist this week.