
Does your company use machine learning? Here’s how to think about the risks

Experts from the Future of Privacy Forum and the data management platform Immuta say the industry has yet to come up with a standard way to assess AI risks.

If you’re using machine learning in your organization, you probably should be thinking about how to manage the ethical, legal, and business risks involved if something goes wrong.

But according to a new paper from the Future of Privacy Forum and Immuta, a data-management startup based in College Park, Maryland, there simply isn't an industry-standard framework for thinking about these kinds of issues.

“We see a really deep and pressing need for guidelines and for an actual framework to measure risk for machine learning,” says Andrew Burt, chief privacy officer at Immuta and one of the paper’s authors.

In the paper, released Tuesday, the authors offer some guidance to companies thinking about these issues. Among their suggestions, inspired in part by a 2011 Federal Reserve document on handling financial model risk, is that companies set up three "lines of defense" for handling artificial intelligence risk.

Those should include a first line of data scientists and other experts who define the exact assumptions and goals of a project; a second line of data and legal experts acting as "validators," who review the assumptions, methods, documentation, and information about the quality of the underlying data; and a third line of regular reviews of the overall assumptions behind the model and how they're working out.

[Embedded video: "FPF & Immuta – How can we govern a technology its creators can't fully explain?" via Vimeo]

The exact details of how those reviews are implemented will vary from organization to organization, they say.

“In a really, really large company there might be more layers of complexity or multiple review chains or channels that are looking at different aspects of it in parallel with each other,” says Brenda Leong, senior counsel and director of strategy at the Future of Privacy Forum and an author of the paper.

The report also offers other recommendations for managing AI risk, including:

  • keeping thorough documentation of the AI model’s intended use, data requirements, specific feature requirements, and where personal data is used and how it’s protected
  • taking steps to understand and minimize unwanted biases, such as those involving race or gender
  • continuous monitoring to detect if things go wrong, or if the data the system is being used to analyze starts to deviate too much from the training data (a rough sketch of what such a drift check could look like follows this list)
  • making sure there’s an up-to-date understanding of how to take the AI tool out of production and what that entails for systems that rely on it
  • explicitly thinking about tradeoffs between accuracy and human understandability, and what those mean for the AI system and its use scenarios
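
The report doesn't prescribe any particular tooling for the monitoring recommendation above. Purely as an illustration, here is a minimal sketch, assuming a Python shop using NumPy, of one way to compare a feature's live distribution against its training-time distribution with the Population Stability Index; the feature, data, and alert threshold are hypothetical stand-ins, not anything specified in the FPF/Immuta paper.

```python
# Illustrative sketch (not from the FPF/Immuta paper) of one way to monitor
# for data drift: compare the distribution of a feature seen in live traffic
# against the distribution seen at training time using the Population
# Stability Index (PSI). Feature, data, and threshold here are hypothetical.
import numpy as np

def population_stability_index(train_values, live_values, bins=10):
    """Return the PSI between a training-time and a live distribution."""
    # Bin both samples using cut points derived from the training data.
    edges = np.histogram_bin_edges(train_values, bins=bins)
    train_counts, _ = np.histogram(train_values, bins=edges)
    live_counts, _ = np.histogram(live_values, bins=edges)

    # Convert counts to proportions, guarding against empty bins.
    train_pct = np.clip(train_counts / max(train_counts.sum(), 1), 1e-6, None)
    live_pct = np.clip(live_counts / max(live_counts.sum(), 1), 1e-6, None)

    # PSI sums each bin's contribution to the overall shift.
    return float(np.sum((live_pct - train_pct) * np.log(live_pct / train_pct)))

# Hypothetical usage: flag a feature whose live distribution has drifted
# far from what the model was trained on.
rng = np.random.default_rng(0)
training_ages = rng.normal(40, 10, 10_000)   # stand-in for training data
live_ages = rng.normal(48, 12, 2_000)        # stand-in for recent live data

psi = population_stability_index(training_ages, live_ages)
if psi > 0.25:  # 0.25 is a commonly cited rule of thumb for a significant shift
    print(f"Drift alert: PSI={psi:.2f} exceeds threshold; review the model")
```

In practice a team would run a check like this on a schedule for each important input feature, and route alerts into whatever review process the second and third lines of defense already use.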

The authors say they may revise their recommendations in the future, as the industry and technology continue to evolve.

“From our perspective this is a living document–this is a first attempt,” Burt says. “There is a big void out there we intend to fill with this document, and we’re under no illusions we’ve gotten this perfect.”

About the author

Steven Melendez is an independent journalist living in New Orleans.
