
Seven very simple principles for designing more ethical AI

Users must be empowered, not overpowered, by technology. The only way to stop questionable AI from collecting and using personal data is to design it with the end user in mind.

[Images: blickpixel/Pixabay; AnonMoos/Wikimedia Commons]

By Frida Polli

No matter how powerful, all technology is neutral.

Electricity can be designed to kill (the electric chair) or save lives (a home on the grid in an inhospitable climate). The same is true for artificial intelligence (AI), which is an enabling layer of technology much like electricity.

AI systems have already been designed to help or hurt humans. A group at UCSF recently built an algorithm to save lives through improved suicide prevention, while China has deployed facial-recognition AI systems to subjugate ethnic minorities and political dissenters. It's therefore impossible to assign a valence to AI broadly; it depends entirely on how a system is designed. And to date, that design has too often been careless.

AI blossomed at companies like Google and Facebook, which give their products away for free and therefore had to find other ways for their AI to make money. They did this by selling ads. Advertising has long been in the business of manipulating human emotions; big data and AI merely allowed this to be done far more effectively, and far more insidiously, than before.

AI disasters, such as Facebook's algorithms being co-opted by foreign political actors to influence elections, could and should have been predicted from this careless use of AI. They have highlighted the need for more careful design, a case now made by AI pioneers like Stuart Russell (co-author of the field's standard textbook), who advocates replacing "standard model AI" with beneficial AI.

Organizations ranging from the World Economic Forum to Stanford to the New York Times are convening groups of experts to develop design principles for beneficial AI. As a contributor to these initiatives, I believe the following principles are key.

Make it easy for users to understand data collection

The user must know that data is being collected and what it will be used for; technologists must ensure informed consent. Too many platforms, across a whole host of applications, rely on surreptitious data collection or repurpose data that was collected for something else. Initiatives to stop this are cropping up everywhere, such as the Illinois law requiring video hiring platforms to tell candidates that AI may be used to analyze their video recordings and to explain how the resulting data will be used.
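To make this concrete, here is a minimal sketch in Python of what informed consent could look like in code: before anything is captured, the platform states what it will collect and why, and stores a record of the user's agreement. Every name here is illustrative, not drawn from any real platform.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class ConsentRecord:
    user_id: str
    data_collected: str   # e.g., "video recording of interview"
    purpose: str          # e.g., "AI analysis of interview responses"
    consented_at: datetime


def request_consent(user_id: str, data_collected: str, purpose: str) -> ConsentRecord:
    """Tell the user exactly what will be collected and why, then record consent."""
    print(f"We will collect: {data_collected}")
    print(f"It will be used for: {purpose}")
    if input("Do you agree? [y/n] ").strip().lower() != "y":
        raise PermissionError("User declined; no data may be collected.")
    return ConsentRecord(user_id, data_collected, purpose, datetime.now(timezone.utc))
```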

Data privacy and ownership

Users must own and control their data. This runs counter to the prevailing modus operandi of many tech companies, whose terms of service are designed to exploit user data for the company's benefit. For example, the tool FaceApp has collected millions of user photos without disclosing what data it gathers or for what purpose. More alarming, its interface obscures the fact that photos leave the user's device. Users must be empowered, not overpowered, by technology. They should always know what data is collected, for what purpose, and from where.

Use unbiased training data

AI must be trained on unbiased data, because any bias in the training data will be multiplied and amplified by AI's power. AI developers have a responsibility to examine the data they feed into their algorithms and to validate that it does not encode any known bias.

For example, it's been well established that data gleaned from résumés is biased against women and minority groups, so hiring algorithms should use other types of data. The San Francisco DA's office and Stanford created a "blind charging" AI tool, which strips race-related information from police reports before prosecutors decide whether to bring charges. This is just one example of using AI to eliminate, rather than double down on, bias.
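As a minimal illustration with made-up numbers, one basic sanity check before training is to compare historical outcome rates across groups in the training data; a large gap suggests the labels encode bias a model would amplify. A sketch using pandas:

```python
import pandas as pd

# Hypothetical training labels: historical hiring outcomes by group.
training_data = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "hired": [1,    1,   1,   0,   1,   0,   0,   0 ],
})

# Positive-label rate per group; a wide gap flags potential label bias.
label_rates = training_data.groupby("group")["hired"].mean()
print(label_rates)
# group
# A    0.75
# B    0.25   <- a 3x gap in historical labels: investigate before training
```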


Audit algorithms

It's not enough to use unbiased data. A statistical quirk known as Simpson's paradox shows how seemingly unbiased inputs can still yield biased results: a trend that holds within every subgroup can weaken or even reverse once the subgroups are combined. So it is also critical to audit the algorithms themselves. Don't let skeptics misinform you: it is possible to audit an algorithm's results for unequal outcomes across gender, race, age, or any other axis where discrimination could occur. An external AI audit serves the same purpose as crash-testing a vehicle to ensure it meets safety regulations. If the audit fails, the design flaw causing the failure must be found and removed.
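To illustrate, here is a minimal sketch of such an audit in Python, using the "four-fifths rule" from U.S. employment guidelines as the threshold: each group's selection rate is compared with the highest group's rate, and a ratio below 0.8 flags a potentially discriminatory outcome. The data is hypothetical, and because of Simpson's paradox, a real audit would repeat the check within relevant strata (say, within each job role) as well as overall.

```python
import pandas as pd

def adverse_impact_ratio(results: pd.DataFrame, group_col: str, passed_col: str) -> pd.Series:
    """Each group's selection rate divided by the highest group's rate."""
    rates = results.groupby(group_col)[passed_col].mean()
    return rates / rates.max()

# Hypothetical audit data: one row per applicant.
outcomes = pd.DataFrame({
    "gender": ["F", "F", "F", "F", "M", "M", "M", "M"],
    "hired":  [1,    0,   1,   0,   1,   1,   1,   0 ],
})

print(adverse_impact_ratio(outcomes, "gender", "hired"))
# F    0.667  <- below the 0.8 threshold: the audit fails; find the design flaw
# M    1.000
```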

Aim for full transparency

White-box AI means full transparency into both the data that goes into an algorithm and the outcomes that come out. Only a white-box algorithm can be audited and, where the audit finds bias, reconfigured. There can be a trade-off between explainability and performance, but in fields like human resources, criminal justice, and healthcare, explainability should win over raw performance, because transparency is essential when technology affects people's lives. Even if a model isn't fully transparent, open-source methods exist to partially explain its decisions.
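One such open-source method is permutation importance, available in scikit-learn: shuffle one feature at a time and measure how much the model's accuracy drops, which reveals the inputs the model leans on most. A minimal sketch on synthetic data (the model choice here is arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for a real decision-making dataset.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle each feature and measure the accuracy drop: the larger the drop,
# the more the model depends on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance = {score:.3f}")
```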

Use open-source methods

Open-source methods should be used, either by releasing key aspects of the code as open source or by building on well-established, peer-tested existing code. The visibility this offers allows for quality assurance. In the case of algorithm auditing, it is essential to understand the process by which companies audit (i.e., safety-test) their algorithms. Initiatives to open-source this auditing technology are already underway.

Involve external councils to create guardrails

An active community of industry leaders and subject-matter experts should be involved in cementing the rules of engagement for building new AI ethically and responsibly. An open discussion should offer a full accounting of the different implications of AI technology as well as specific standards to follow.

As history has shown, innovation invites fear and early misuse. But with the right design and guardrails, it can be harnessed for a positive impact on society. So it is with AI. With careful forethought and deliberate efforts to push back on human bias, AI can be a powerful tool not just to mitigate bias, but to actually remove it in a way that is not possible with humans alone. Imagine life without electricity: a world of darkness. Let's not deprive ourselves of the positive impact of ethical AI.

Frida Polli, PhD, is the founder and CEO of Pymetrics.
