Democratizing AI Is Not A Silver Bullet

Open source is proving popular in AI development, but is it emblematic of a larger problem?

10 Apr

As with any new technology, AI carries many implications we have yet to really consider. For example, what would the West’s response be if, say, North Korea managed to develop a superintelligence first? Will we be able to imbue AI with an ethical system? And how boring is GTA going to be when everyone in the game is in a driverless car?

Such questions seem to have - at least for the time being - been put on the back burner. Tech giants are focused squarely on the tremendous potential AI has to improve the world, from driving productivity to helping discover new medicines. The likes of Amazon, Google, Microsoft, and IBM have pumped huge sums of money into research, and they have made huge leaps forward. In their quest to develop AI as quickly as possible, these companies are also now open sourcing their technology, making it freely available to developers. However, while these efforts seem altruistic and focused on creating a better future for everyone, the rush to open source is also emblematic of a worrying culture in AI development that could prove disastrous.

Calls to democratize AI have come from leaders at every major tech company. Microsoft CEO Satya Nadella recently wrote that he wanted AI ‘in the hands of every developer, every organization, every public sector organization around the world’ so that they can build their own intelligence and AI capability. Fei-Fei Li, newly hired chief scientist of artificial intelligence and machine learning at Google Cloud, agrees, stating: ‘The next step for AI must be democratization. This means lowering the barriers of entry, and making it available to the largest possible community of developers, users and enterprises.’ Elon Musk, never one to be outdone, goes even further, essentially arguing it is the key to preventing an artificial intelligence-induced apocalypse: ‘If AI power is broadly distributed to the degree that we can link AI power to each individual's will - you would have your AI agent, everybody would have their AI agent - then if somebody did try to do something really terrible, then the collective will of others could overcome that bad actor.’

These are big words from important people, and while talk is cheap, they have been backed up by action. Google open sourced TensorFlow, its machine learning library, in 2015, while Amazon has made its Deep Scalable Sparse Tensor Network Engine (DSSTNE - pronounced ‘Destiny’) library available on GitHub under the Apache 2.0 license. Elon Musk’s OpenAI bills itself as a ‘non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.’ It attracts developers with the pitch that they will get to pursue research aimed solely at the future rather than at products and quarterly earnings, and to eventually share most - if not all - of that research with anyone who wants it.
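To give a sense of how low these barriers to entry now are, here is a minimal sketch of training a model with the open source TensorFlow library via its Keras API. The toy task and numbers are purely illustrative and not drawn from any of the projects above:

```python
# A minimal sketch of 'democratized' AI: anyone can pip-install
# TensorFlow and fit a model in a dozen lines. The toy task here
# (learning y = 2x - 1 from five points) is purely illustrative.
import tensorflow as tf

# Five (x, y) pairs sampled from the line y = 2x - 1.
xs = tf.constant([[0.0], [1.0], [2.0], [3.0], [4.0]])
ys = tf.constant([[-1.0], [1.0], [3.0], [5.0], [7.0]])

# A single-neuron linear model trained with plain gradient descent.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.1),
              loss="mse")
model.fit(xs, ys, epochs=200, verbose=0)

# Should print a value close to 19.0 (i.e. 2 * 10 - 1).
print(float(model.predict(tf.constant([[10.0]]), verbose=0)[0][0]))
```

The point is not the model, which is trivial, but the access: the same library Google uses in production is a pip install away for anyone.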

There are a number of persuasive reasons why democratizing AI makes sense. Firstly, in terms of improving the technology, research cannot be done in the shadows, and it will greatly benefit from having more people engaged in it. Secondly, it should help to distribute the profits from AI. The impact on the jobs market is among the most cited drawbacks of AI, with roughly 50% of jobs likely to be automated by 2030, according to some estimates. Much of the value created by AI will accrue to those who develop it, which could lead to massive wealth inequality. François Chollet, deep learning researcher at Google, argues that ‘One way to counter-balance this is to make value creation through AI as broadly available as possible, thus making economic control more distributed and preventing a potentially dangerous centralization of power. If everyone can use AI to solve the problems that they have, then AI becomes a tool that empowers individuals. If using AI requires contracting a specialized company (that will most likely own your data), then AI becomes a tool for the centralization and consolidation of power.’

The problem, however, is that - with the exception of Elon Musk, who has been vocal on the issue - the tech giants seem to see open source as just another way to develop AI as quickly as possible. When OpenAI was founded, its incoming employees reported receiving extraordinary counteroffers from other companies. According to Wired, Wojciech Zaremba, a researcher who was joining OpenAI, said he felt the money was at least as much an effort to prevent the creation of OpenAI as a play to win his services. This suggests that these companies are less interested in seeing the technology develop than in seeing their technology develop.

We are now developing AI under race conditions. Google is racing against Facebook, the US is racing against Russia - everyone wants to be first to bring the latest iteration of machine superintelligence to market, for fear that losing out to a competitor would mean their destruction. On top of this, we are dealing with a technology that may be able to teach itself at a speed far beyond our comprehension. Electronic circuits are a billion times faster than the human brain, and supercomputers will be able to make thousands of years of progress in a day. Consequences on that scale will likely demand a whole new political and economic system to cope with them. By racing, we are not giving ourselves time to build the systems and infrastructure capable of dealing with AI and solving the very real problems it has the potential to cause - an environment in which we can credibly say: we have to share everything, we have to share the wealth, we have to share the information, we have to open source everything.

Open source helps to counter this centralization of power, but, conversely, it is also part of the dangerous race mentality that is likely to cause these problems. While open sourcing the technology seems sensible, it does not solve the issues at the heart of AI. If anything, we need to slow down. There needs to be a set of design principles and guidelines discussed across the tech industry and society at large. In AI safety, the time left to solve the problem is a crucial factor, and many still seem to believe it is a problem down the road for someone else to deal with. Developers should be able to readily access the knowledge and tools they need to realize their full potential, but the more pressing problem right now is that we are rushing, and that needs to stop first.
