It’s a marker of Stanley Kubrick’s cultural importance that it’s difficult to have a conversation about decision-making in AI without referencing 2001: A Space Odyssey. The director’s masterpiece concerns competitive evolution and the inevitable tussle between humans and an AI that has surpassed them in sophistication. Kubrick’s epic is one of the earliest examples of what are now established concerns about AI’s decision-making: what happens when cold objectivity meets the complexities of human choices?
The potential for mass unemployment is daunting in the short term, with unskilled labour likely to be the first to go. The value imbalance between skilled and unskilled workers will only deepen if unskilled work can be done indefinitely by machines for a fraction of the long-term cost. Yet this seems inevitable as companies look to protect their bottom lines and chase productivity. What is more concerning in the long term is the potential for a wider system based on nothing but numbers.
This is not a discussion about humanoid robots causing carnage on the streets after a malfunction turns them murderous. It’s not about AI-run space missions allowing humans to die because it increases the likelihood of success. It’s about the automation of services and businesses. How do we, as a society, protect against brutal algorithms rolled out by faceless corporations under the banner of productivity, in place of otherwise understanding members of staff? Or against the shambles that follows when an under-developed AI fails to see an obscure problem with a train timetable and brings the network to a standstill?
A good example of AI’s potential for cold efficiency is the UK’s welfare system. Already bureaucratic, the relationship between those on Job Seeker’s Allowance and the state is too often one of punishment and box-ticking rather than compassionate aid. Though speeding up processes with AI would improve waiting times and allow for more remote interaction, intelligent automation would be far less understanding than a human supervisor. It would also be far more difficult to contest a decision when an intelligent machine can demonstrate, with apparent objectivity, that your ineligibility for welfare is in line with wider national objectives. Whatever your political stance, it’s tough to deny that nuance is as important in cases like welfare as it is in legal matters. Automation will get the job done: it will cut costs where it needs to and greatly speed up application processes, but it will also add a further layer of detachment between those in need and the budget-holders.
It doesn’t seem too ‘tin foil hat’ to be concerned about the cold calculations of computers pervading society in place of human warmth or empathy. We already have examples of problematic algorithms being rolled out without due care. At St George’s Hospital Medical School in the 1980s, for example, an algorithm used to sift through student applications was found to discriminate against women and people with ‘non-European-looking names’. And AI has since led to unjustly lost jobs, mistaken arrests, and a host of other complications that will only grow in number as the technology proliferates.
And the laws in place to protect people are, at present, far from effective. Scientists have developed tests to detect AI bias, but there is currently no AI watchdog to regulate decision-making by machines. The Guardian notes that ‘UK firms, in common with those in other countries, do not have to release any information they consider a trade secret.’ That would include the AI deciding who is granted or denied a credit card or a loan, for example, leaving those rejected in the dark as to why their application failed.
Without proper checks or regulatory bodies in place, victims of faulty, insensitive, or discriminatory AI systems will find themselves battling something designed to remove the necessity of (or opportunity for) discourse. A rejected credit card application might not carry repercussions one would deem life-changing, but in matters like law and order, welfare, health care, or the military, unflinching objectivity should be introduced with the utmost caution.