The Morality Of Machine Learning

Can data solve crime?

24 Aug

In Philip K. Dick’s science fiction short story ‘The Minority Report’, three mutants, known as ‘precogs’, have precognitive abilities that enable them to see up to two weeks into the future. The precogs are strapped into machines while a computer listens to their apparent gibberish and interprets it into predictions of crimes yet to be committed.

The appeal of such a system to police forces is obvious: saving a life is clearly of greater benefit than merely catching the killer. And thanks to predictive analytics and machine learning, police forces across the US are now some way towards achieving this dream. However, in Dick’s story, protagonist John A. Anderton ultimately reveals the central flaw in the whole system: once people are aware of their future, they can change it. Some now argue that using machine learning to predict likely criminals is similarly flawed, accusing such algorithms of perpetuating discrimination and structural racism in the police force. But are they right?

The lack of bias in Big Data is often cited as one of its major plus points, and is central to why it has been taken up by such vast numbers of organizations seeking to rid themselves of often-flawed human intuition. The Chicago Police Department recently joined forces with Miles Wernick, professor of electrical engineering at the Illinois Institute of Technology, to create a predictive algorithm that generates a ‘heat list’ of the 400 individuals with the highest chance of committing a violent crime. By focusing on likely suspects, the police say they can concentrate their scarce resources where they are needed.

Wernick’s algorithm looks at a variety of factors for each individual, including their arrest history, their acquaintances' arrest histories, and whether any of their associates have previously been the victim of a shooting. Wernick argues that the algorithm uses no ‘racial, neighborhood, or other such information’, and that it is ‘unbiased’ and ‘quantitative.’
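
To make the idea concrete, here is a minimal sketch of what a risk-scoring system of this kind might look like. The feature names, the synthetic data, and the choice of logistic regression are all illustrative assumptions, not the actual Chicago system, whose details have not been published.

```python
# A hypothetical risk-scoring sketch: score individuals on features of the
# kind described above and take the highest-scoring 400 as the 'heat list'.
# The features, data, and model are assumptions for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic records: [own arrests, associates' arrests, associate shot (0/1)]
X = rng.integers(0, 10, size=(1000, 3)).astype(float)
X[:, 2] = (X[:, 2] > 7).astype(float)

# Synthetic training labels: past involvement in violent crime
logits = 0.4 * X[:, 0] + 0.2 * X[:, 1] + 1.0 * X[:, 2] - 3.0
y = (rng.random(1000) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# Rank everyone by predicted risk and keep the top 400
risk = model.predict_proba(X)[:, 1]
heat_list = np.argsort(risk)[::-1][:400]
print(heat_list[:10], risk[heat_list][:10])
```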

The danger, however, is that these algorithms may reflect biases inherent in the data they are trained on. Machine learning is so effective as a framework for making predictions precisely because programs learn from human-provided examples rather than explicit rules and heuristics. Data mining looks for patterns in data, so if, as Jeremy Kun argues, ‘race is disproportionately (but not explicitly) represented in the data fed to a data-mining algorithm, the algorithm can infer race and use race indirectly to make an ultimate decision.’ One of the strongest predictors of crime is poverty, an issue that still disproportionately affects black people. It could also be argued that inputs such as arrest histories have themselves been shaped by pre-existing structural racism, and that by feeding this information into an algorithm all you are doing is scaling up stereotypes and reinforcing them with something that merely appears unbiased.
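
Kun’s point can be demonstrated with a toy example. In the sketch below (all data and feature names are synthetic assumptions), a protected attribute is never given to the risk model, yet correlated ‘proxy’ features are enough both to reconstruct it and to push the two groups’ risk scores apart.

```python
# Hypothetical sketch of proxy leakage: even when a protected attribute is
# excluded, correlated features let a model reconstruct it and use it
# indirectly. All data and feature names here are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000

# Protected attribute (never shown to the risk model)
group = rng.integers(0, 2, size=n)

# Proxy features that correlate with the protected attribute, e.g. because
# historical policing concentrated arrests in certain neighbourhoods
neighbourhood_poverty = rng.normal(loc=group * 1.5, scale=1.0)
prior_arrests = rng.poisson(lam=1 + 2 * group)

X = np.column_stack([neighbourhood_poverty, prior_arrests]).astype(float)

# Can the supposedly neutral features predict the protected attribute?
proxy_model = LogisticRegression().fit(X, group)
print("group recoverable from proxies, accuracy:", proxy_model.score(X, group))

# A risk model trained on the same features therefore scores the two groups
# very differently, even though it never saw the attribute itself.
y = (prior_arrests + rng.normal(size=n) > 2).astype(int)  # synthetic outcome
risk_model = LogisticRegression().fit(X, y)
risk = risk_model.predict_proba(X)[:, 1]
print("mean predicted risk by group:",
      risk[group == 0].mean(), risk[group == 1].mean())
```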

However, predictive algorithms and machine learning models can be adapted quickly and easily to interpret new data and generate predictions in real time. By contrast, the prejudices people hold about likely criminals tend to persist far longer, and are far more likely to be based on superficial factors, from skin colour to clothing.

Los Angeles, Atlanta, Santa Cruz and many other police jurisdictions use a similar predictive policing tool called PredPol, and have subsequently reported double-digit reductions in crime. In the current climate, with racial tensions at boiling point and police mistrust in the black community at an all-time high, it is important that the police be as transparent as possible. This includes keeping the public as informed as possible about how they are using predictive algorithms; otherwise public mistrust could hamper the use of what could be an incredibly beneficial tool.
