
The prison-reforming First Step Act has a critical software bug

The bipartisan bill has been hailed as a triumph, but its reliance on algorithms might only reinforce existing disparities.


The road to hell is paved with good intentions. In a rush of excitement to solve a thorny problem, people often champion a solution without weighing all of its potential pitfalls, intentional or otherwise, or they knowingly ignore the risks because the reward seems so great. As the First Step Act heads to President Donald Trump’s desk following widespread support in Congress, it’s important to see how the bill could be both a blessing and a curse.


The First Step Act has lofty and noble goals. The legislation seeks to reform the criminal justice system at the federal level by easing mandatory minimums for “three strikes” offenders (reducing the automatic sentence from life to 25 years), shortening prison sentences through artificial intelligence algorithms that reward good behavior, allowing inmates to receive “earned time credits” by participating in more educational, trade, and rehabilitative programs, and retroactively applying the Fair Sentencing Act of 2010, which narrowed the sentencing disparity between crack and powder cocaine offenses.

What is most notable is the bill’s use of technology. Technology in policing is not uncommon: predictive policing is used to decide where to dispatch officers at any given time, and facial recognition is spreading across public spaces. But using artificial intelligence to determine the fate of those already imprisoned is new, and some recent events should give citizens pause about legislators’ enthusiasm for applying technology to prison reform.

Red flags abound. During a congressional hearing this week, Google CEO Sundar Pichai fielded lawmakers’ questions about perceived privacy issues and biases in the company’s products. At one point, Pichai had to remind Representative Steve King (R-IA) that Google does not make iPhones, after King asked why his daughter’s iPhone was malfunctioning. If lawmakers can’t keep tech companies and their products straight, why would we want them in charge of suggesting or selecting the technology that will regulate federal prison sentencing? Confusing Google for Apple, and what each company produces, is but one example of how distant legislators are from the innovation they seek to implement and regulate. That education gap needs to be remedied before lawmakers eagerly enact technology they don’t understand.

In another line of questioning, Pichai noted that “algorithms have no notion of political sentiment,” in response to Representative Steve Chabot (R-OH). While that is technically true, the people who create algorithms do hold political biases, along with gender, religious, and racial ones. Algorithms are only as good as the data they are fed, and that data is only as diverse and inclusive as the people writing the code. If the data, and the validation of that data, does not account for marginalized populations and nuance, the algorithm will be inherently, if unintentionally, biased.
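To make that point concrete, here is a purely hypothetical sketch, not any system actually used under the bill: a naive “risk score” trained on arrest records in which one group is policed twice as heavily. Even with identical underlying behavior, the model learns the disparity in the data and reports it back as risk.

```python
# Illustrative only: a toy "risk score" learned from skewed historical data.
# Two hypothetical groups have identical true reoffense rates, but group B
# is policed twice as heavily, so its members appear in the arrest records
# twice as often.
from collections import Counter

TRUE_REOFFENSE_RATE = 0.10            # identical for both groups
ARREST_MULTIPLIER = {"A": 1, "B": 2}  # group B is over-policed

def simulated_arrest_records(n_per_group=10_000):
    """Build 'training data' whose labels reflect arrests, not behavior."""
    records = []
    for group, mult in ARREST_MULTIPLIER.items():
        arrests = int(n_per_group * TRUE_REOFFENSE_RATE * mult)
        records += [(group, 1)] * arrests                  # arrested
        records += [(group, 0)] * (n_per_group - arrests)  # not arrested
    return records

def learned_risk_score(records):
    """A 'model' that simply learns each group's observed arrest rate."""
    totals, positives = Counter(), Counter()
    for group, label in records:
        totals[group] += 1
        positives[group] += label
    return {g: positives[g] / totals[g] for g in totals}

scores = learned_risk_score(simulated_arrest_records())
# Despite identical underlying behavior, group B scores twice as "risky,"
# because the data encodes policing patterns, not conduct.
print(scores)  # {'A': 0.1, 'B': 0.2}
```

The group names, rates, and multiplier here are invented for illustration; the mechanism is the point: a model trained on biased records faithfully reproduces the bias.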

Furthermore, the AI systems used in criminal justice lack transparency. Who is fact-checking the fact-checkers? Who sets the parameters for what counts as relevant information in the decision-making process? Who checks that datasets represent diverse populations fairly rather than skewing toward certain groups, and that the data and research underlying them reflect a range of perspectives? This lack of transparency leads to a lack of accountability, because there is no way to thoroughly audit the information or the process. Without the ability to properly audit the algorithms used in sentencing, we cannot see their possibly skewed outcomes, let alone correct them.
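The kind of audit the bill does not require can be surprisingly simple. As one hypothetical example, the “four-fifths rule” used in employment-discrimination analysis flags any group whose favorable-outcome rate falls below 80% of the most-favored group’s rate; the numbers below are invented for illustration.

```python
# Illustrative only: a minimal disparity audit using the "four-fifths rule."
# A group is flagged if its favorable-outcome rate is below 80% of the
# most-favored group's rate.

def selection_rates(outcomes):
    """outcomes: {group: (favorable_count, total_count)}"""
    return {g: fav / tot for g, (fav, tot) in outcomes.items()}

def four_fifths_check(outcomes):
    """Return whether each group passes the 80% threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (r / best >= 0.8) for g, r in rates.items()}

# Hypothetical counts of who is deemed "low risk" and credit-eligible:
audit = four_fifths_check({"group A": (700, 1000), "group B": (450, 1000)})
print(audit)  # {'group A': True, 'group B': False}
```

A check like this only detects one narrow kind of skew, which is exactly why the deeper questions above, about who builds the datasets and sets the parameters, still matter.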

This leads us back to the algorithms utilized under the First Step Act, which decide who can redeem earned-time credits. Inmates deemed higher risk are excluded from participating, although not from earning the credits, which can only be redeemed once their risk level is reduced. The larger question is what factors deem someone “higher risk,” and more importantly, who makes that decision? What is the lived experience of those setting the standards for “higher risk” inmates? Do they understand key community and cultural nuances of criminal justice, such as the fact that black, Latinx, and poor people are more likely to be imprisoned for crimes even though they are no more likely to commit them? Those groups intersect heavily: black and Latinx individuals are disproportionately likely to be poor. Additionally, the bill creates a caste system of sorts by automatically excluding undocumented immigrants from receiving earned credits.


In that sense, the bill does nothing to “free” the communities it claims to want to help. While it adjusts mandatory minimum sentences for nonviolent offenders, one could argue it doesn’t go far enough on initial sentencing: judges are given discretion to sentence people with minor previous convictions to less than the mandatory minimums, but that discretion is not required, and it is offered to some, not all. Requiring judges to forgo mandatory minimums in these cases would address initial sentencing, the driving force of mass incarceration. Let’s also be mindful that this administration reversed the Obama administration’s move away from private prisons, and transferred prisoners from public facilities to private prisons whose operators donated significantly to the Trump campaign. Does this seem like real reform?




Though well intentioned, the bill is a wolf in sheep’s clothing, particularly once you add the technology component. Laws shouldn’t be written out of fear, but from a place of strength, and in this post-Cambridge Analytica world, there is a lot of fearmongering. Technology isn’t always the answer, particularly when ethics are paramount, and it is futile if legislators aren’t educated and informed about the uses and ethical pitfalls of AI in criminal justice reform. Left unmonitored and unadjusted, the software risks helping imprison more people from marginalized, poor, and even rural communities. The real first step would be to include and consult the people who understand, write, and use the technology lawmakers seek to implement, before good intentions result in dire consequences.


Bärí A. Williams is vice president of legal, policy, and business affairs at All Turtles. She previously served as head of business operations, North America for StubHub, and lead counsel for Facebook, and created its Supplier Diversity program. Follow her on Twitter at @BariAWilliams.
