Films have long trained us to fear machine intelligence. This isn't the movie industry's fault: its primary function is to entertain, and the Terminator series would have been pretty banal if humans and robots had just sat around playing Mario Kart for eight hours.
When The Terminator came out, little was known about Machine Learning and its implications for AI. That has changed dramatically in the last few years, with Machine Learning's offshoot, Deep Learning, growing exponentially.
Machine Learning algorithms let a system acquire knowledge through a supervised learning process: a human feeds it labelled data, from which it discovers patterns it can then recognize in the future. In the case of images, a Machine Learning system is taught what a cat looks like by a human telling it, over and over, that the image it's being shown is a cat, until it can recognize one without prompting. Deep Learning takes this a step further by eliminating the need for a human teacher: the system draws its own conclusions about which layers of intermediate features it needs to identify. To continue with the cat example, a Deep Learning system let loose on a site like YouTube can analyze millions of videos and work out what a cat is by itself.
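To make the supervised case concrete, here is a toy sketch: a human supplies labelled examples ("this one is a cat"), and the system classifies a new input by finding the closest labelled example. The two made-up features (ear pointiness, whisker prominence) and the nearest-neighbour rule are illustrative assumptions, not a description of any real pipeline.

```python
import math

# Human-labelled training data: (features, label).
# Each feature vector is a stand-in for measurements taken from an image.
labelled = [
    ((0.9, 0.8), "cat"),
    ((0.8, 0.9), "cat"),
    ((0.2, 0.1), "dog"),
    ((0.1, 0.3), "dog"),
]

def classify(features):
    # Nearest-neighbour rule: predict the label of the closest known example.
    def distance(example):
        return math.dist(example[0], features)
    return min(labelled, key=distance)[1]

print(classify((0.85, 0.75)))  # → cat
print(classify((0.15, 0.20)))  # → dog
```

The point is the division of labour: every label in `labelled` came from a human, and the system's only job is to generalize from those labels to new inputs.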
Deep Learning's roots go back to the 1950s and the invention of digital neural nets. These roughly simulate the way the human brain learns: when you begin a new task, a certain set of neurons fires; you observe the result, and in subsequent attempts your brain uses that feedback to adjust which neurons get activated.
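That feedback loop can be sketched with a single artificial neuron, a perceptron, learning the logical AND function. The weights, learning rate, and training data below are illustrative choices, not anything from the history above, but the loop is the one described: the neuron fires, the result is compared with the target, and the error feeds back to adjust when it fires next time.

```python
def fire(weights, bias, inputs):
    # The neuron "fires" (outputs 1) if the weighted sum of its inputs
    # crosses the threshold of zero.
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            output = fire(weights, bias, inputs)
            error = target - output  # feedback from the observed result
            # Use the feedback to adjust which inputs make the neuron fire.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Logical AND: fires only when both inputs are on.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(samples)
print([fire(weights, bias, inputs) for inputs, _ in samples])  # → [0, 0, 0, 1]
```

Modern Deep Learning stacks many such neurons into layers and uses a more sophisticated feedback rule (backpropagation), but the adjust-on-error principle is the same.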
Neural nets were then largely forgotten, except by a few researchers who persevered. In 2006, Geoff Hinton, one of those few, began organizing several layers of artificial neurons so that the entire system could be trained, or even train itself, to divine coherence from random inputs, in much the same way the human brain learns.
Since then, some of the world's biggest tech companies have gotten involved: Google, Microsoft and Facebook have invested millions in research into advanced neural networks and Deep Learning. Google, with its access to so much data, has a particular advantage, and has made a land grab into AI that far outstrips the others, with last year's acquisition of London-based AI outfit DeepMind, for a reported $400 million, standing out as a game-changing move.
The attraction for companies like Google is clear: Deep Learning is a self-perpetuating revenue generator. Not only does it improve the search engine's functionality when first implemented, but every time you type a query, click on a search-generated link, or create a link on the web, you are training Google's AI. The benefits are already visible: the neural network behind the search engine, which once required 1,000 computers to run, now needs just four.
Deep Learning is, like all technology, neither inherently good nor bad. But it is not just lunatics in tinfoil hats who worry that self-aware computers could spell danger. Demis Hassabis, CEO and co-founder of DeepMind, has acknowledged that the advanced techniques his own group is pioneering could cause AI to spiral out of human control and may need to be constrained, while his co-founder Shane Legg considers human extinction caused by artificial intelligence the top threat of this century. As a result, contingencies have been put in place: DeepMind investor Elon Musk has just committed $10 million to studying the dangers of AI, and Hassabis and his co-founders made it a condition of Google's takeover that an outside board of advisors monitor the progress of the company's AI efforts.
These are sensible precautions. Deep Learning is not, for the moment, about self-aware machines taking over the world, but the speed at which the technology advances makes it difficult to see even five years into the future. Wherever we end up, however, Deep Learning is likely to be a driving force.