2019 is the year to stop talking about ethics and start taking action

By Katharine Schwab | 7 minute read

2018 was a year of reckoning for tech companies, their employees, and consumers. Both Facebook and Google were caught mishandling people's personal information–landing their leaders in front of Congress. There was public outcry over Amazon licensing biased facial recognition software to ICE and police departments. Tech employees got fed up with their companies, sparking protests across Silicon Valley. With so little oversight from regulators and continued poor judgment on the part of big companies, both consumers and makers of tech were asking: What does it mean to develop technology in an ethical way?

So far, that question has instigated a lot of talk, but 2019 is the year to take action. How? Here are seven do’s and don’ts for any company or individual dedicated to developing ethical technology in 2019.

[Photo: Hero Images/Getty Images]

Don’t bother with a code of ethics

In March, Google's employees protested the company's contract with the Department of Defense to use AI to make military drones more accurate, with thousands signing a petition and some even resigning from their jobs. The company responded a few months later by releasing a code of ethics meant to govern how it develops AI. One part promised that Google would not develop AI for use in weapons.

But there's a problem with codes of ethics–they don't work. A 2018 study from North Carolina State University found that the Association for Computing Machinery's newly issued code of ethics, the professional standard for software engineers, had no measurable impact on how developers made decisions in ethically questionable situations.

While codes of ethics can help guide a large organization's policy decisions–after all, Google's AI principles (along with the employee uprising) led the company to withdraw from bidding on the Pentagon's $10 billion JEDI cloud contract–they aren't binding in any legal sense. Google could say one thing publicly and do another behind closed doors, without any consequences.

[Photo: Toa Heftiba/Unsplash]

Do take a class (or just read the news)

There’s a better way to help designers and engineers act more ethically when developing technology–educate them.

That’s the idea behind a series of ethical tech classes that have sprung up in places like Carnegie Mellon University, where computer science professor Fei Fang began teaching a class called Artificial Intelligence for Social Good. The idea: If computer science students can learn to think about the potential impact of their code, they’ll be more likely to make ethical decisions. The Mozilla Foundation is also throwing its weight behind this idea, with a multi-year competition that offers cash prizes to encourage professors to come up with ways of teaching ethics to computer science students that won’t make them fall asleep at their desks.

As for folks who are out of school: Take an online class. Or just read the news. As the North Carolina State University study on codes of ethics pointed out, developers who were informed about current events made more responsible decisions about how to develop technology than those who weren't.

[Source Images: Apple, VLPA/iStock]

Don’t blame users

One of the biggest questions about ethical technology has to do with who really has control over the ways users behave, especially when it’s to their detriment. Case in point: People spend too much time on their phones, but their phones were designed to be addictive. Same goes for platforms like YouTube, whose autoplay feature is designed to pull you down a rabbit hole of stupid videos, and before you know it your afternoon is gone. Is that your fault or YouTube’s?

After years of blaming users for their weak wills, tech companies are finally taking some responsibility for their designs–though only in baby steps. Apple and Google have released features that claim to serve users' digital well-being: screen-time monitors that show how much time people spend on their phones, more ways to limit notifications (like turning them off completely around bedtime), and app timers that let users cap how long they spend in certain apps.

Unfortunately, these moves have done little to address the underlying problems that cause digital addiction in the first place. For instance, YouTube released a new tool that tells you how much time you're spending watching YouTube–but the company hasn't made it any easier to turn off the autoplay feature in your account settings. That's the kind of real design change that would be in users' best interests.

Companies can't afford to keep up these half measures. When building a product, designers should make the default settings the ones that serve users best. Firefox is a good example: once it completes further usability testing, the browser will block third-party trackers, which collect your data as you surf the web, by default.
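
To make the default-settings principle concrete, here is a minimal sketch in Python (a hypothetical illustration, not Firefox's actual configuration code) of a settings object whose defaults are the privacy-protective choices, so users are protected unless they deliberately opt out.

from dataclasses import dataclass

# Hypothetical sketch of privacy-protective defaults, not real Firefox code.
@dataclass
class PrivacySettings:
    block_third_party_trackers: bool = True   # the protective behavior is the default
    autoplay_next_video: bool = False         # attention-grabbing features require opt-in
    share_usage_analytics: bool = False       # data collection is opt-in, not opt-out

new_user = PrivacySettings()                            # a new user gets the safe defaults
power_user = PrivacySettings(autoplay_next_video=True)  # opting in is still possible
print(new_user.block_third_party_trackers)              # True

The point isn't the code itself but the direction of the defaults: users shouldn't have to dig through settings to get the safer behavior.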

[Photo: Artefact]

Do ask questions, and a lot of them

It’s obviously difficult to know when a product will have unintended consequences. But a handy tool created by the Seattle-based design firm Artefact can help you make sure you’re at least asking the right questions when you design a new product.

It comes in a quirky format–a tarot card deck–where each card presents a series of questions that designers should be addressing or at least contemplating while developing any kind of product or service: “Who or what disappears if your product is successful?” “If two friends use your product, how could it enhance or detract from their relationship?” “What could cause people to lose trust in your product?” Mulling these questions, even if you don’t have all the answers, is an important first step toward designing ethically.

[Source Images: Jason Jaroslav Cook/Getty Images (pattern), winterling/iStock (photo)]

Do embrace transparency

Good design is transparent design. Of course you don't want to overwhelm users with too much information, but you have to make sure they know what they're consenting to when using a service or product. For instance, in March 2018, users discovered that Facebook had been collecting their call and text message data without their knowledge. Facebook also doesn't clearly inform users that it tracks more than what they click on while on the platform–it tracks their mouse movements too, a practice revealed in a document submitted to the U.S. Senate Committee on the Judiciary in June. In December, emails from Mark Zuckerberg revealed that the company gave user data to certain companies, like Netflix and Airbnb, without users' consent. In all likelihood, the revelations about how Facebook has used people's data without consent will continue in 2019. Either way, the company certainly has a lot more information about you than you probably realize.

Transparency is particularly important with artificial intelligence. When Google premiered its latest conversational AI technology, Duplex, earlier this year, it caused an uproar because the demo didn't include any kind of disclaimer that the very human-sounding voice was actually a robot. (Google quickly announced that it planned to add a disclosure when the technology was released to the public.) Don't make the same mistake.

[Image: ThingsCon]

Do get certified

Even if you are putting users’ interests first in your design, you might want to get some outside validation. There are now multiple ways to broadcast that you’ve developed technology responsibly. Statistician and author Cathy O’Neil runs a consulting service that will audit your algorithm to ensure that it’s accurate, doesn’t discriminate against protected groups like women or minorities, and abides by civil rights laws. One startup that underwent the audit found that a potential investor was deeply impressed, which helped the company secure funding.
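
O'Neil's audit methodology is proprietary, but the core question such an audit asks is straightforward: does an algorithm treat protected groups measurably worse than everyone else? The Python sketch below is a hypothetical illustration of one common screen, the "four-fifths rule" for disparate impact, not her actual process.

from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs, e.g. ("group_b", True)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {group: approved / total for group, (approved, total) in counts.items()}

def disparate_impact_ratio(decisions, protected, reference):
    """Ratio of the protected group's approval rate to the reference group's.
    The "four-fifths rule" flags ratios below 0.8 for closer review."""
    rates = approval_rates(decisions)
    return rates[protected] / rates[reference]

# Made-up loan decisions for illustration only.
decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 55 + [("group_b", False)] * 45)

print(round(disparate_impact_ratio(decisions, "group_b", "group_a"), 2))  # 0.69, below 0.8

A real audit goes well beyond a single ratio, but even this crude check turns "is our algorithm fair?" into a question a design review can actually answer.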

On a similar note, Internet of Things companies can now perform a self-assessment and submit the results to a Mozilla-funded, NYU-affiliated nonprofit that will grant a “Trustable Technology Mark” if the product is up to its high standards.

Both of these certification programs are a great way to prove to users that you really do have their best interests at heart.

[Photo: courtesy Are.na]

Do turn users into owners

Perhaps the most extreme way to ensure that you're developing technology ethically is to make your users the owners of your platform–not venture capitalists, not shareholders. That's exactly what the subscription-based social media startup Are.na did in 2018. While the platform has been around for many years, its subscriber numbers took off in 2018, with all of its revenue coming from users' subscriptions–not from advertisers. Earlier this year, the company also launched a crowdfunding campaign that allowed anyone to invest in Are.na. Ultimately, Are.na raised $270,000 from nearly 900 investors, the majority of whom were people already using the platform.

It’s a radical form of ownership in our capitalist age, but one that other companies could consider in different capacities. What would it mean if your users were part-owners of your company? How would that change your decision-making?



ABOUT THE AUTHOR

Katharine Schwab is the deputy editor of Fast Company's technology section. Email her at kschwab@fastcompany.com and follow her on Twitter @kschwabable.

