
How to lift the veil off hidden algorithms

In the absence of rules around algorithms, activists, lawyers, and tech workers are hacking transparency through other means.

[Images: Daria Vasenina/iStock; Markus Spiske/Unsplash]

BY DJ Pangburn | 10 minute read

In 2012, the New Orleans police department quietly partnered with the data-mining company Palantir to implement a predictive policing system meant to help identify likely criminals and victims. For six years, neither the city council nor the courts were told that citizens’ data was being mined to generate police “target lists” and investigate individuals. Questions about the program’s propriety, legality, or value were never addressed. Ron Serpas, the city’s police chief at the time, told reporter Ali Winston, who revealed the program last year: “It is, to me, something that certainly requires a view, requires a look.”

In March, New Orleans officials said the contract with Palantir would not be renewed, but the relationship exposed a broader concern about how the government uses algorithms and data. New software is entering the public sector, helping to identify criminals, match students with schools, guide criminal sentencing, and determine government benefits. But few citizens have any idea that these technologies exist, let alone how they’re being used. Even when the public is aware, trade secrets and non-disclosure agreements prevent any deep analysis of just how the technologies work.

“These companies act out of private self-interest, but their decisions have considerable public impact,” Elizabeth Joh, a constitutional law professor at the University of California, Davis who studies the surveillance industry, wrote in 2017. “The harms of this private influence include the distortion of Fourth Amendment law, the undermining of accountability by design, and the erosion of transparency norms.”

In a forthcoming paper, Sonia Katyal, co-director of the Berkeley Center for Law and Technology, argues that intellectual property laws protecting government-purchased software are quietly outstripping civil rights concerns. “In the context of artificial intelligence, we see a world where, at times, intellectual property principles prevent civil rights from adequately addressing the challenges of technology, thus stagnating a new generation of civil rights altogether,” she writes.

Amid the transparency void, activists, lawyers, and tech workers are turning to a growing array of tactics to bring more oversight to algorithms and other technologies, says Amanda Levendowski, a clinical teaching fellow with the Technology Law & Policy Clinic at NYU Law.

“It’s impossible for the public to engage in discourse about whether an AI system is fair, accountable, transparent, or ethical if we don’t know that an AI system is being used to watch us–or if we don’t know the technology exists at all,” she says.

New laws

One significant obstacle to transparency in the U.S. is the Computer Fraud & Abuse Act, written back in 1984 to protect government agencies’ computer systems against hacking. Private-sector companies are able to avoid external audits of their systems–for bias or other flaws–by claiming auditing is a form of unauthorized access, says Shankar Narayan, Technology and Liberty Project Director at ACLU Washington. Reform of the CFAA could help pave the way for independent audits of algorithmic systems used by various government agencies.

Federal legislation mandating transparency, however unlikely it may be, would be a strong bulwark against hidden data technologies. That could include a new regulatory body to oversee software and algorithms, in the spirit of the U.S. Food and Drug Administration.

“The FDA was established in the early 20th century in response to toxic or mislabeled food products and pharmaceuticals,” media scholar Dan Greene and data scientist Genevieve Patterson recently argued in IEEE Spectrum. “Part of the agency’s mandate is to prevent drug companies from profiting by selling the equivalent of snake oil. In that same vein, AI vendors that sell products and services that affect people’s health, safety, and liberty could be required to share their code with a new regulatory agency.”

In a report published in December by AI Now, a group of researchers echoed that idea, pointing to examples like the Federal Aviation Administration and National Highway Traffic Safety Administration. They also called on vendors and developers who create AI and automated decision systems for government use to waive trade secrecy or other legal claims “that inhibit full auditing and understanding of their software.”

In the absence of significant state or federal laws regarding algorithms, some municipalities are implementing their own transparency rules for public-sector algorithms. In 2017, the New York City Council passed an algorithmic accountability law creating a task force to recommend how city agencies should share information related to automated decision systems, and how those agencies will address instances where the systems harm people.

During the bill’s hearing, “[sponsor James Vacca] didn’t even know every agency that’s using algorithms or how they’re using those algorithms,” says Levendowski, who has been tracking the procurement of surveillance technology for the last five years. Neither did some of the representatives of the agencies themselves. “And if those folks don’t know, it’s going to be really hard for the public to figure this out.”

A number of California cities and counties have passed some of the strongest ordinances governing the procurement of surveillance technologies. Santa Clara County passed a surveillance ordinance in 2016 that requires each government agency to adopt a policy disclosing “information/data that can be collected, data access, data protection, data retention, public access, third-party data sharing, training, and oversight mechanisms.”

Last year, the city of Berkeley passed a law that creates a more transparent process for purchasing a fast-growing arsenal of surveillance gear, like facial recognition equipment, Stingray cell site simulators, social media analytics software, and license plate readers. Oakland followed suit. Seattle and Cambridge, Mass. are among a handful of other cities that have also passed surveillance ordinances.

In Washington State, an algorithmic decision-making bill has recently been introduced in the legislature. “We’ll make a much stronger push against trade secrets and other ways to defeat transparency because we don’t think it’s constitutional for these black box algorithms to be making decisions on behalf of government agencies,” says Narayan.

A 70-year-old federal law has also given rise to a surprising transparency tool, says Levendowski: the federal trademark registry. Known officially as the Trademark Electronic Search System, or TESS, the federal trademark registry is a free and easily accessible database of millions of pending, registered, canceled, expired, or abandoned trademarks. These trademark filings can be revealing, and unlike regulatory filings or disclosures in patent applications, searching them–with terms like “AI,” “defense,” or “surveillance”–requires little legal or technical expertise.
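For researchers who want to scale up such searches, the same kind of keyword filtering is also easy to script. The sketch below is a minimal, hypothetical Python example: it assumes you have already exported a batch of trademark records (for instance, from the USPTO’s bulk trademark data) into a CSV file with mark, owner, and description columns. The file name, column names, and keyword list are illustrative assumptions, not part of any official TESS interface.

```python
import csv

# Illustrative keywords that often signal surveillance- or AI-related
# products in trademark filings; adjust the list to your own research.
KEYWORDS = ["surveillance", "facial recognition", "predictive policing",
            "license plate", "artificial intelligence", "biometric"]

def flag_filings(path):
    """Yield (mark, owner) pairs whose goods/services description mentions
    any keyword. Assumes a CSV with 'mark', 'owner', and 'description'
    columns exported from a bulk trademark dataset (hypothetical file)."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            description = (row.get("description") or "").lower()
            if any(keyword in description for keyword in KEYWORDS):
                yield row["mark"], row["owner"]

if __name__ == "__main__":
    # trademarks.csv is a placeholder; TESS itself is a web search
    # interface, so the records must be gathered or exported first.
    for mark, owner in flag_filings("trademarks.csv"):
        print(f"{mark}\t{owner}")
```

Any filings flagged this way can then be looked up in TESS itself to read the full application.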

Taking algorithms to court

When polite requests for transparency don’t succeed, lawyers and activists have successfully argued that trade secrets and NDAs violate individual rights to due process. Generally, due process allows people to defend the rights afforded to them by U.S. law and to contest actions proposed by the government.

In one case, K.W. v. Armstrong, the ACLU of Idaho argued that the state’s Medicaid assessment algorithm for developmentally disabled adults violated their right to due process. The court agreed, finding that the algorithmic formula was so faulty–producing arbitrary results for a large number of people–that it was unconstitutional. The judge ordered an overhaul to the system, the ACLU wrote in a blog post, including “regular testing, regular updating, and the use of quality data.”

Other legal maneuvers in the transparency toolkit include Freedom of Information Act requests and, if necessary, lawsuits to enforce compliance.

In June 2016, the Brennan Center for Justice at New York University School of Law filed a Freedom of Information Law (FOIL) request with the NYPD “seeking records relating to the acquisition, testing, and use of predictive policing technologies.” After the NYPD resisted the request on the basis of trade secrecy and police confidentiality, the Brennan Center filed suit in September 2017, noting that the City of New York had already spent nearly $2.5 million on predictive policing software made by Palantir.

The lawsuit also revealed the department tested two other predictive policing platforms by the companies KeyStat Inc. and PredPol. NYPD documents, meanwhile, showed that the agency had no serious privacy policy governing its use of predictive policing software, and could not produce records showing it had conducted mandatory audits.

In December 2017, a judge rejected the NYPD’s claims. While the court denied the Brennan Center’s request for the data used to train the predictive policing algorithms, it ordered the police department to make available the email correspondence with vendors, output data from the tests, and notes by the NYPD’s assistant commissioner of data analytics.

“Citizens have the right to know about the tools, costs, and standard practices of law enforcement agencies that police their communities,” wrote Rachel Levinson-Waldman, a staff attorney at the Brennan Center. “In this case, it took a year and a half and a lawsuit to ultimately get even a portion of the information that should have been made available to the public at the outset.”

Last week, the NYPD lost another battle in its fight for secrecy. A group of Black Lives Matter activists sued the police department after it refused to confirm or deny whether it had surveilled them and interfered with their cellphones during a protest in 2014. A court rejected the NYPD’s claim that even confirming the existence of such records would reveal trade secrets or hurt public safety. The ruling requires the NYPD to disclose records related to its use of Dataminr, a social media surveillance software package, as well as records about technologies used to interrupt cellphone service.

“This court recognizes that respondent does not have to disclose how it conducts criminal investigations. But this is not about a criminal investigation or a counterintelligence operation,” wrote Manhattan Supreme Court Justice Arlene Bluth. “It arises from reports of protestors who claim that their cellphones are suddenly unable to function while in the middle of a protest. That possibility, that respondent is interfering with protestors’ ability to communicate with each other, is a serious concern ripe for the use of FOIL.”

Change from the inside

Katyal, the Berkeley professor, argues that because the critical civil rights questions around new technologies lie in the domain of the private and not the public sector, “we are looking in the wrong place if we look only to the state to address issues of algorithmic accountability.”

One potentially powerful force for transparency in software, she says, must come from inside the companies themselves. Last year, tech employees began to find their voices, with much of the activism focused on government contracts. In a number of employee-signed letters, workers cited concerns about algorithmic bias and the overall ethics of using algorithms, and, in some cases, asked their companies to pull out of government contracts entirely.

Last spring, 3,100 Google employees protested the company’s bid on Project Maven, a Pentagon project that would use AI to analyze drone footage. The company said it would pull out of efforts to win the contract and created a set of AI principles, which state that Google, among other things, won’t create products that create or reinforce bias, make technologies that cause harm, or build systems that use information for surveillance purposes.

Inspired by the Google employees, Microsoft employees called for the company to form AI principles that would bring transparency to the product development and sales processes. “How will workers, who build and maintain these services in the first place, know whether our work is being used to aid profiling, surveillance, or killing?” the employees wrote.

In their recent report, the AI Now researchers highlight the emerging importance of tech workers in demanding transparency, given that many engineering employees “have considerable bargaining power and are uniquely positioned to demand change from their employers.”

The report calls for tech companies to provide employee representation on their boards of directors, and to create external ethics advisory boards as well as independent third-party monitoring and transparency efforts. Microsoft, Google, and Facebook have announced internal ethics projects aimed at evaluating the societal impact of their software, and last year body camera and stun gun maker Axon launched an ethics board. The board is made up of external advisers–including ethicists, police officials, and a lawyer who has staunchly opposed the company’s face recognition ambitions–but its recommendations to the company will not be binding.

Whistleblowers have also proved critical in exposing highly controversial technologies, including Google’s Dragonfly project, which sought to bring a censored version of its services into China. After employees disclosed the project to The Intercept in August, widespread employee protests and public outcry reportedly led the company to effectively suspend the project.

Still, company executives have not publicly said they will cease Dragonfly’s development. On January 3, prominent Google engineer Liz Fong-Jones, a vocal critic of Dragonfly and Project Maven, said she would be resigning from the internet giant after 11 years because she was dissatisfied with its direction and “lack of accountability and oversight.”

Going forward, the value of employees in ensuring transparency and accountability is likely to grow. Beyond encouraging whistleblowers to speak up, the major challenge will be how to afford these individuals whistleblower protection, whether through government laws or corporate policies, amid the tech industry’s famous culture of secrecy.

“Fortunately, we are beginning to see new coalitions form between researchers, activists, lawyers, concerned technology workers, and civil society organizations to support the oversight, accountability, and ongoing monitoring of AI systems,” says the AI Now report.

ABOUT THE AUTHOR

DJ Pangburn is a writer and editor with bylines at Vice, Motherboard, Creators, Dazed & Confused and The Quietus. He's also a pataphysician, psychogeographer and filmmaker.

