Is Elon Musk Right Or Wrong On AI?

Should we highly regulate it or let it grow organically?

The march of AI development continues unabated. Every month seems to bring news of new technologies doing things people would previously have thought impossible, whether that's beating a chess champion or driving a car. The acceleration has been extraordinary, but a few voices have been rightfully worried about the technology's potential for damage.

One of the most vocal has been Elon Musk, who recently came out and publicly called for proactive regulation of AI, claiming 'by the time we are reactive in AI regulation, it's too late.' In his speech to the US National Governors Association, he said, 'Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry… It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization.'

The question is, should we make sure that AI is highly regulated? We take a look at both sides of the argument.

Yes, it should be highly regulated

Anybody who has watched the Terminator films, Blade Runner, The Matrix, Ex Machina, or any of the hundreds of other AI-focused films of the past three decades will know what some of our most creative writers believed AI could do, whether that's turning the entire population into batteries without their knowledge or simply killing everyone on earth. These are clearly hyperbolic visions, but without adequate oversight, some of these outcomes are not an impossibility. As Musk said, it would be considerably easier to be proactive in creating regulations that prevent something bad from happening than to react once it already has.

Musk's idea also holds weight when you consider the struggles that governments and regulatory bodies have had with big data regulation over the past decade. Big data is no longer a new concept; it is something the majority of medium and large companies now utilize to improve their business performance. Yet the first large-scale international regulations have only really been implemented in the past two months. In the lag time, big data was governed by 30-year-old laws that were never fit for purpose, which led to numerous issues around the world, not least data security failures and the abuse of data collection. By laying the groundwork for AI regulation early, it would be possible to avoid repeating these mistakes.

The development of AI is predominantly being done by private companies, and given that the aim of a private company is shareholder value, AI will be used to make money. That often means people losing their jobs as they are replaced by more efficient and cheaper machines. We have already seen the threat that self-driving vehicles pose to professional drivers: there would be no place for a truck driver limited to 14 hours on duty at a time alongside a self-driving truck that never needs to stop. Once these kinds of AI innovations take hold in an unregulated environment, the advantage for early adopters will be huge, forcing almost every company to do the same to compete, meaning the 3.5 million truck drivers in the US could lose their jobs within only a few years. Regulations need to be in place to maximize productivity without the kind of negative social impact the technology has the potential to cause.

AI is also being developed predominantly by a small number of big tech companies that already dominate their industries. According to McKinsey's State Of Machine Learning And AI, 2017, 'Artificial Intelligence (AI) investment has turned into a race for patents and intellectual property (IP) among the world's leading tech companies.' Without regulation, AI development could remain under the control of these companies indefinitely, letting them set the agenda of what may or may not be possible, sway governments through their incredibly powerful lobbyists, and increase their already huge market share. Regulation can create a more level playing field, allowing a wider range of companies to utilize the technology effectively.

No, regulation should be minimal

AI has huge potential for almost anything imaginable; it could help solve the world's greatest problems, from global warming through to curing cancer and everything in between. We are currently walking in the shallows of a great ocean of AI opportunity.

At present, as noted in McKinsey's State Of Machine Learning And AI, 2017, the vast majority of AI patents are in the hands of only a few companies, which leaves limited bandwidth for anything beyond profit making. Ironically, this means the areas where AI could have the greatest societal impact will be limited in what they can initially do with it. Other organizations may need workarounds and bootstrapping to use these new technologies, which would become increasingly difficult under excessive regulations, preventing those organizations from having the impact they otherwise could.

One of the catalysts that allowed big data to grow so quickly was the data community, which created innovative open source platforms that helped anybody who wanted to use them, from huge multinational companies to small local charities. Much of that experimentation took place precisely because the field was unregulated; the foundations of Hadoop, for instance, came from work being done at Yahoo! at the time. It would be difficult for the large tech companies who currently control much of AI technology to open source what they develop if there were a huge amount of regulation around how people could use it, as doing so could leave them open to legal issues, something no company wants. Heavy regulation could therefore easily hold back the development of AI.

The most important argument, though, is that the majority of discussion around AI concentrates on the worst possible outcomes: AI will either kill us all outright or steal our jobs and cause the destruction of society. The reality is that neither of these things will happen, because if we don't like what AI is doing, we can simply turn it off. We won't all lose our jobs; it makes considerably more sense to make those working more productive, and after all, a society (and hence its companies) cannot function with mass unemployment. As for killing everybody, we would need to program AI to learn how to kill everybody, which only the maddest and most evil of evil mad scientists would even consider doing.
