Machine Learning Could Help When Sentencing Criminals - If Used Right

It has drawbacks, but could be useful

16 Jun

The US justice system is a big beast, with 350,000 cases passing through the courts each year. There are roughly 6.7 million adults under some form of criminal justice supervision. Of these, 2.2 million are held in local jails or state and federal prisons - roughly 716 inmates for every 100,000 citizens - while the rest are on probation or parole. According to a 2012 Vera Institute of Justice study, the number of those incarcerated has increased by more than 700% over the last four decades.

This situation is untenable. It is expensive, and there is little evidence that incarceration actually reduces crime; indeed, it is estimated that 70% of inmates have been imprisoned before. The level of crime in the US is similar to that of other stable industrialized nations, yet its corrections system costs $74 billion a year to run - eclipsing the GDP of 133 nations.

The reasons for the US’s high level of incarceration are complex, and there is no simple solution. The problem is, however, at least now being recognized, albeit slowly. Incarceration rates have dipped slightly in recent years, primarily due to the release of thousands of nonviolent drug offenders from the federal prison system in 2015. A number of states, including California, have enacted legislation and policies to cut prison populations, retroactively reducing some drug and property crimes from felonies to misdemeanors, expanding substance abuse treatment programs, and increasing investment in re-entry programs.

Another solution being touted is the use of machine learning and predictive analytics during the sentencing process. We have already seen police forces use such technology to predict crime hot spots and target potential recidivists, but using it to influence sentencing is new. Here, machine learning algorithms are applied to historical data to learn the characteristics of those likely to offend again, and to measure the degree to which the convicted party exhibits them. It removes - or so the thinking goes - human bias from the equation, which should make the US court system both fairer and more effective. However, there are a number of problems with this.
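To make the idea concrete, here is a minimal sketch of what such a risk-scoring pipeline generally looks like: a classifier trained on historical records, producing a probability of re-arrest that is then presented as a risk score. Everything in it - the features, the synthetic data, and the model choice - is an assumption for illustration only; COMPAS’s actual inputs and model are proprietary.

```python
# Illustrative sketch only: the real COMPAS model and features are secret.
# This shows the general shape of a recidivism risk score: a classifier
# trained on historical records, then used to score a new defendant.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic historical records: [age, prior arrests, age at first arrest]
# and whether the person was re-arrested within two years (1 = yes).
n = 5000
age = rng.integers(18, 70, n)
priors = rng.poisson(2, n)
age_first = rng.integers(12, 40, n)
X = np.column_stack([age, priors, age_first])

# Synthetic label loosely tied to the features, purely for demonstration.
logit = -1.5 + 0.4 * priors - 0.03 * (age - 18) - 0.05 * (age_first - 12)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score a hypothetical new defendant: a probability of re-arrest, typically
# bucketed into "low / medium / high risk" bands before a judge sees it.
defendant = np.array([[31, 4, 17]])
risk = model.predict_proba(defendant)[0, 1]
print(f"Estimated recidivism risk: {risk:.2f}")
```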

Firstly, if the explanation I’ve given sounds vague, that’s because nobody outside the companies knows how they reach the conclusions they do. While this might be acceptable in marketing, the same standards cannot apply when a human being’s freedom is on the line. In February 2013, Eric Loomis was arrested while driving a car that had been used in a shooting. He pled guilty to eluding an officer and no contest to operating a vehicle without its owner’s consent. A judge rejected a plea deal and sentenced Loomis to a harsher punishment, citing a data-driven risk assessment from Northpointe called COMPAS as part of the reason, telling him: ‘You’re identified, through the COMPAS assessment, as an individual who is a high risk to the community.’

Loomis appealed the sentence, arguing that neither he nor the judge could examine the formula for the risk assessment because it was a trade secret. The state of Wisconsin countered that Northpointe required it to keep the algorithm confidential in order to protect the firm’s intellectual property. Wisconsin’s attorney general, Brad D. Schimel, even used the same argument that Loomis did - that judges do not have access to the algorithm either - though he somehow spun this as a positive. This is a bit like saying a game of chess is fairer if neither player knows the rules. True, in a way, but it is unlikely to produce a game of chess so much as two people throwing pieces around a board, with no winner in any traditional sense. The Wisconsin Supreme Court nonetheless upheld Loomis’s sentence, reasoning that the risk assessment was only one part of the rationale for it, and confirmed that judges may continue to consider a COMPAS score in sentencing even though they have no idea how it is calculated.

The accused must be able to challenge an algorithmic scoring process. They need to be able to examine how the algorithm weighs different data points, and why. As Frank Pasquale notes in an article for MIT Technology Review, ‘A secret risk assessment algorithm that offers a damning score is analogous to evidence offered by an anonymous expert, whom one cannot cross-examine. Any court aware of foundational rule of law principles, as well as Fifth and Fourteenth Amendment principles of notice and explanation for decisions, would be very wary of permitting a state to base sentences (even if only in part) on a secret algorithm.’

This secrecy is especially worrying given the concerns already raised about how machine learning algorithms pick up real-world biases. Machine learning is so effective as a framework for making predictions because programs learn from human-provided examples rather than explicit rules and heuristics. Data mining looks for patterns in data, so if, as Jeremy Kun argues, ‘race is disproportionately (but not explicitly) represented in the data fed to a data-mining algorithm, the algorithm can infer race and use race indirectly to make an ultimate decision.’ Poverty is one of the strongest predictors of crime, and it is an issue that still disproportionately impacts black people. It could also be argued that inputs such as arrest histories have themselves been shaped by structural racism, and that feeding this information into an algorithm simply scales up existing stereotypes while dressing them in something ostensibly unbiased. Perhaps these systems do no such thing - but without transparency, how are we to know?
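A small sketch can show how this indirect inference happens. In the toy example below, a protected attribute is never given to the model, yet it can be reconstructed almost perfectly from seemingly neutral features that correlate with it (an invented neighbourhood code and a prior-arrest count). All the features and data here are made up purely to illustrate the proxy effect Kun describes, not drawn from any real system.

```python
# Sketch of the "proxy variable" problem: race (or any protected attribute)
# is never fed to the risk model, but correlated features let a model
# reconstruct it indirectly. All data below is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10000

# Protected attribute (never shown to the risk model).
group = rng.integers(0, 2, n)

# Features strongly correlated with group membership, e.g. because of
# historically segregated housing or uneven policing patterns.
neighbourhood = group * 5 + rng.normal(0, 1, n)
priors = rng.poisson(1 + group, n)  # heavier policing inflates priors for one group

X = np.column_stack([neighbourhood, priors])

# Even though 'group' is not a feature, it is easy to infer from the features.
proxy_model = LogisticRegression(max_iter=1000).fit(X, group)
print("Group recoverable from 'neutral' features, accuracy:",
      round(proxy_model.score(X, group), 2))
```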

Justice Ann Walsh Bradley, writing for the Wisconsin Supreme Court, noted a ProPublica report on COMPAS that found exactly this, concluding that black defendants in Broward County, Fla., ‘were far more likely than white defendants to be incorrectly judged to be at a higher rate of recidivism.’ Inputs derived from biased policing will inevitably make black and Latino defendants look riskier than white defendants to a computer. As a result, data-driven decision-making risks exacerbating, rather than eliminating, racial bias in criminal justice. These algorithms may reflect the biases inherent in their training data, and this is not the first time their use in law enforcement has been criticized.
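The disparity ProPublica described is essentially a gap in false positive rates: how often people who did not go on to reoffend were nonetheless scored as high risk, broken down by group. A sketch of that kind of audit follows; the scores and outcomes are synthetic stand-ins, since the real COMPAS scores and follow-up data are not reproduced here.

```python
# One way to check the disparity ProPublica described: compare false positive
# rates (people scored "high risk" who did not reoffend) across groups.
# Scores and outcomes below are synthetic; this only illustrates the audit.
import numpy as np

def false_positive_rate(scores, reoffended, threshold=0.5):
    """Share of non-reoffenders who were nonetheless flagged as high risk."""
    flagged = scores >= threshold
    non_reoffenders = reoffended == 0
    return flagged[non_reoffenders].mean()

rng = np.random.default_rng(2)
n = 2000
group = rng.integers(0, 2, n)                      # 0 and 1 stand in for two groups
reoffended = rng.integers(0, 2, n)                 # observed two-year outcome
scores = np.clip(rng.normal(0.4 + 0.15 * group, 0.2, n), 0, 1)  # biased scores

for g in (0, 1):
    mask = group == g
    fpr = false_positive_rate(scores[mask], reoffended[mask])
    print(f"Group {g}: false positive rate = {fpr:.2f}")
```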

This is all somewhat beside the point, as COMPAS doesn’t appear to work very well anyway. The ProPublica investigation also found that COMPAS is burdened by large error rates: in one real-world study it failed to predict reoffending 37% of the time. Northpointe has disputed the study’s methodology - which is somewhat ironic, given that it is only able to do so because it is afforded a luxury those sentenced using its technology are not.

Algorithmic risk assessment is in its early stages and will get better. But before it is adopted more widely by the courts, there must be complete transparency about how the risk assessments work, and any bias must be identified and removed. Simply declaring the model a ‘trade secret’ is not acceptable. The Loomis case also exposed a bigger falsehood perpetuated in the US justice system that must be recognized first - that recidivism is cured by more jail. If anything, more jail seems to make it worse. Courts need to use machine learning in conjunction with policies that actually help prevent recidivism rather than exacerbate it, identifying those most likely to reoffend and the effective action that can be taken to help them.
