There has been much debate in recent months about the need for governments to start putting regulations in place to control the rise of AI. Tech luminaries such as Elon Musk have been vocal about their fear that if developers are allowed to continue advancing the technology without restraint, it could end up destroying humanity. Equally, however, there are those who believe AI could save humanity, with some going so far as to argue that a society governed by AI would be preferable to our current democratic system.
This is not a new idea. Isaac Asimov explored the concept in his short stories 'Evidence' and 'The Evitable Conflict', from the I, Robot collection. In Asimov's stories, robots come to the realization that they cannot fully uphold the primary, overriding law of robotics: that 'a robot may not injure a human being or, through inaction, allow a human being to come to harm'. They calculate that the best they can do while upholding this law is to minimize universal suffering, essentially adopting a utilitarian approach whereby pain is selectively distributed in order to prevent a greater amount of harm coming to humanity as a whole. The most efficient way of doing this, the robots realize, is to take charge of government without humanity knowing.
One character, Susan Calvin, explains the logic of this move in 'The Evitable Conflict', arguing that 'our entire technical civilization has created more unhappiness and misery than it has removed. Perhaps an agrarian or pastoral civilization, with less culture and less people, would be better.'
We can already see the seeds of an AI-driven government in place today, with governments increasingly turning to machine learning to automate tasks and even make decisions. Data has been integral to decision making in the UK since John Major was Prime Minister, and this escalated under Tony Blair, who attempted to reduce everything to numbers in order to optimize public services while reducing the reliance on fallible human decision making. The problems with this have been well documented, with many looking to game the systems, but as the amount of data we collect has swollen and the technology to analyze it has improved, the reliance on data has not diminished. If anything, it has grown.
Governmental decision making is today more complex than ever. Politicians are having to deal with a constantly shifting geopolitical dynamic, climate change, rising inequality, and a rapidly evolving jobs market, to name just a few of the challenges. With so much uncertainty, governments are essentially reacting to events on the fly, with limited understanding of the consequences. They need data more than ever, and they need machine learning algorithms to analyze it. But at what point does government allow AI to take over entirely? Is there really a future in which AI takes over the entire function of government, including setting policy?
An AI-run government would presumably function in much the same way that the technocracy movement envisioned. In a technocracy, decision makers are selected for their expertise in a given domain rather than elected on the strength of their popularity.
The arguments for putting AI in charge are strong. Power corrupts, and it has long been humanity's downfall. If machines held all the power, humanity would be freed from an influence that has wreaked havoc and sown discord and war since the beginning of time. In a recent article, Scott Beauchamp cites Richard Brautigan's 1967 poem 'All Watched Over by Machines of Loving Grace', in which humans are 'joined back to nature,/returned to our mammal/brothers and sisters' through a 'cybernetic ecology'. Beauchamp interprets this as meaning that human beings are fundamentally too flawed to be trusted with their own paradise. Unfettered by personality, machines would be rulers without greed, fear, hate, or love, going about the drudgery of administering to human clients free of the disastrous trappings of the ego. 'It's a political dream that Brautigan imbues with religious overtones. Not only would machines free us from toiling on the Earth; humans themselves would be transformed and returned to some Rousseauvian state of idyllic primitive bliss.'
This is an appealing vision, but it runs into serious problems.
Firstly, all government policy concerns tradeoffs, many of which will result in deaths.
For example, should people be locked away if they have not yet committed a crime, but we can say with 95% statistical certainty that they will? Who sets the acceptable level of risk? Should everybody have access to the same health care, regardless of economic status? Should expensive treatments be given to virtuous people over convicted criminals? Who determines morality when a crime is unrelated to human life and death? And what does this mean for regulation? Does the AI outlaw every activity it deems too risky? More people die playing golf than any other sport, primarily because of the age of participants, but would an AI ban golf? Human beings could, ultimately, be regulated into uniformity, left with no choices other than those that pose no risk to their lives.
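That '95% certainty' figure already hides an uncomfortable piece of arithmetic. A back-of-the-envelope sketch, assuming (hypothetically) that the figure means 95% of the people flagged really would have offended, makes the tradeoff concrete:

```python
# Back-of-the-envelope: what a "95% certain" predictive-policing
# threshold implies, under the assumption that 95% is the precision
# of the flag (i.e. 5% of those flagged would never have offended).

def expected_wrongful_detentions(flagged: int, precision: float) -> float:
    """Expected number of detained people who would never have offended."""
    return flagged * (1 - precision)

# If an AI government flags 10,000 people at 95% certainty,
# it still locks up roughly 500 people who would have done nothing.
print(expected_wrongful_detentions(10_000, 0.95))  # ≈ 500
```

The numbers here are illustrative, not drawn from any real system; the point is that whoever sets the threshold is also setting the number of innocent people imprisoned.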
In some ways, the question of whether human beings would be reluctant to hand over control has already been answered. We have already seen governments turn decision making over to the whims of the markets, as Bill Clinton did on the advice of Alan Greenspan, a move that caused widespread suffering in the countries most exposed at the time.
AI is different, at least to a degree: we can program a measure of morality into an AI, whereas markets are chaotic and amoral. The Platonic idea of philosopher-kings represents a sort of technocracy in which the state is run by those with specialist knowledge of the Good rather than scientific knowledge, and AI could conceivably be empowered to play this role. Deciding who determines what is good, however, is a complex issue. For the time being at least, AI's role is that of a decision-making partner rather than a decision maker in its own right. The risks involved in letting AI take over government are far too great, and humans, the majority of whom are devoted to the idea of democracy, are unlikely to acquiesce to a system that could destroy it when many still won't even get into an autonomous car. The advantages are there, however, and if the transition were handled transparently, in a way that humans were comfortable with, we may yet see AI elected to power within our lifetimes.