Will AI Ever Be Elected?

Many fear AI enslaving humanity, but it is more likely that we will end up voting it in democratically


There has been much debate in recent months about the need for governments to start putting regulations in place to control the rise of AI. Tech luminaries such as Elon Musk have been vocal in their fear that if developers are allowed to continue advancing progress without restraint, the technology could end up destroying humanity. Equally, however, there are those who believe AI could save humanity, with many going so far as to argue that a society governed by AI would be preferable to our current democratic system.

This is not a new idea. Isaac Asimov explored the concept in his short stories 'Evidence' and 'The Evitable Conflict', from the I, Robot collection. In Asimov's stories, robots come to the realization that they cannot fully uphold the primary, overriding law of robotics, that 'a robot may not injure a human being or, through inaction, allow a human being to come to harm'. They calculate that the best they can do while upholding this law is to minimize universal suffering, essentially adopting a utilitarian approach whereby pain is selectively distributed in order to prevent a large amount of harm coming to humanity as a whole. The most efficient way of doing this, the robots realize, is by taking charge of government without humanity knowing.

One character, Susan Calvin, explains the logic of this move in The Evitable Conflict, arguing that, ‘our entire technical civilization has created more unhappiness and misery than it has removed. Perhaps an agrarian or pastoral civilization, with less culture and less people would be better. If so, the Machines must move in that direction, preferably without telling us, since in our ignorant prejudices we only know that what we are used to, is good—and we would then fight change. Or perhaps a complete urbanization, or a completely caste-ridden society, or complete anarchy, is the answer. We don't know. Only the Machines know, and they are going there and taking us with them.’

We can already see the seeds of an AI-driven government in place today, with governments increasingly turning to machine learning to automate tasks and even make decisions. Data has been integral to decision making in the UK since John Major was Prime Minister, and this escalated under Tony Blair, who attempted to reduce everything to numbers in order to optimize public services while reducing the reliance on fallible human decision making. The problems with this have been well documented, with many looking to game the systems. But as the amount of data we collect has swollen and the technology to analyze it has improved, the reliance on data has not diminished; if anything, it has grown.

Governmental decision making is today more complex than ever. Politicians are having to deal with a constantly shifting geopolitical dynamic, climate change, rising inequality, and a rapidly evolving jobs market, to name just a few. With so much uncertainty, governments are essentially reacting to events on the fly, with limited understanding of the consequences. They need data more than ever, and they need machine learning algorithms to analyze it. But at what point does a government allow AI to take over entirely? Is there really a future in which AI takes over the entire function of government, including setting policy?

An AI-run government would presumably function in much the same way that the technocracy movement envisaged. In technocracy, politicians and businesspeople were to be replaced by scientists and engineers who had the technical expertise to manage the economy. AI would likely act in place of these experts, planning and setting a rational order in which society specifies its needs and organizes the factors of production to achieve them.

The arguments for putting AI in charge are strong. Power corrupts; it has long been humanity's downfall. If machines hold all the power, humanity is freed from an influence that has wreaked havoc and sown discord and war since the beginning of time. In a recent article, Scott Beauchamp cited Richard Brautigan's 1967 poem 'All Watched Over by Machines of Loving Grace', noting that what Brautigan longs for in the poem is a utopia without work, where humans are 'joined back to nature, / returned to our mammal / brothers and sisters' through 'cybernetic ecology', human beings being fundamentally too flawed to be trusted with their own paradise. 'Unfettered by personality, machines would be rulers without greed, fear, hate, or love, going about the drudgery of administering to human clients free of the disastrous trappings of the ego. It's a political dream that Brautigan imbues with religious overtones. Not only would machines free us from toiling on the Earth; humans themselves would be transformed and returned to some Rousseauvian state of idyllic primitive bliss.'

This is an exceptionally optimistic vision of an AI-run government, and it fails to take into account the many issues that would likely arise. For AI to get to the point where it was able to deal with the tremendous complexity of running a country, humanity would have to be fully comfortable with the technology, which is a long way off. What would it mean for democracy? Would AI just replace the bureaucracy, or the entire electoral system of representatives too? Would a collection of AIs run different systems, or would a single one govern a country? Democracy is much vaunted, particularly in the West, and it is difficult to imagine people electing what is essentially an undemocratic machine dictator, one that may decide it is in humans' best interests that it remain in power indefinitely.

Firstly, all government policy concerns tradeoffs, many of which will result in deaths. Extra money spent on medical research may save lives, but could come at the cost of money needed by the police, which may see others die. These numbers are fogged in unique, minute details and contexts that perhaps a machine is better able to understand, but who tells it where the priorities should lie? Medical research may save more lives, but it may result in lives that humans would not consider worth living. How do we set the norms that the AI is operating under? A society must be judged on how it treats its prisoners and its most vulnerable, and if AI is to adopt a utilitarian approach, it might do away with many of the progressive values now commonly accepted.

For example, should people be locked away if they have not yet committed a crime, but we can say with 95% statistical certainty that they will? Who sets the level of risk? Should everybody have access to the same health care, regardless of economic status? Should expensive treatments be given to virtuous people over convicted criminals? Who is to determine morality if a crime is unrelated to human life and death? Furthermore, what does it mean for regulation? Does AI outlaw every activity that it deems to be too risky? More people die playing golf than any other sport, primarily due to the age of participants, but would AI ban golf? Human beings could, ultimately, be regulated into uniformity, with no choices available other than those that would not risk their lives.

In some ways, the question of whether human beings would be reluctant to hand over control has already been answered. We have already seen governments turn decision making over to the whims of the markets, as Bill Clinton did under the advice of Alan Greenspan. This caused widespread suffering, with Mahathir Mohamad, Prime Minister of Malaysia, one of the countries that suffered most at the time, saying, 'Power corrupts. As much as governments can become corrupt when invested with absolute power, markets can also become corrupt when equally absolutely powerful.' Turning government over to the markets in this way was insidious. We didn't vote to do so; indeed, it went against many of Clinton's campaign promises.

AI is, to a degree, different, as we can program a degree of morality into it, rather than trying to make sense of chaotic and morally bankrupt markets. The Platonic idea of philosopher-kings represents a sort of technocracy in which the state is run by those with specialist knowledge of the Good, rather than scientific knowledge, and AI could perhaps be empowered to play this role. Deciding who determines what is good, however, is a complex issue. For the time being at least, AI has a role to play as a decision-making partner rather than as a decision maker itself. The risks involved in letting AI take over governments are far too great, and humans, the majority of whom are devoted to the idea of democracy, are unlikely to acquiesce to a system that could destroy it when many still wouldn't even get in a self-driving car. However, the advantages are there, and if it were done in a transparent way that humans were comfortable with, we may well see AI elected to power within our lifetimes.
