Technological progress moves at such a pace that human understanding of why something is so effective, and how best to use it, is sometimes left playing catch-up. We currently find ourselves in this situation with big data: most organizations and governments appreciate the value it holds and see the benefits daily, but lack a real understanding of how it works and how to use it to its full potential. This discrepancy has created a strange world in which we are reliant on data, yet lack any real control over its power.
The sheer volume of data produced by companies in the digital age means that analysis can often be a daunting proposition. There is an understandable temptation to leave technology more or less to its own devices and to take any insights it garners as gospel requiring no further interrogation. Individuals with the ability to analyze data are hard to find, and people are prone to making mistakes. Relying on technology to make decisions also means that people don’t have to take the blame when things go wrong.
For now, however, caution should still be exercised before relying exclusively on technology to make decisions. Machines have limits, and there is a body of evidence to suggest that they are at their most effective when used as part of a collaborative effort with humans. They are tremendously good at spotting patterns in data, for example, but they struggle to determine why something is happening, and need the context that only humans can provide. They cannot understand or explain the emotional response somebody has towards a product or situation, and cannot account for the unpredictability of human nature. The story many point to when describing the power of machines to make decisions is that of the IBM supercomputer that defeated chess grandmaster Garry Kasparov in 1997. Fewer know that in 2005, two amateur chess players teamed up with three personal computers to win $20,000 in a chess tournament against a field of supercomputers and grandmasters. As Kasparov himself noted in a review of a book about AI and the human mind, ‘human strategic guidance combined with the tactical acuity of a computer was overwhelming.’
Selecting which features of a dataset to analyze has always required human intuition, but this is beginning to change thanks to advances in automated machine learning technology. A new MIT data analytics system has already seen considerable success. A prototype of the system was entered in three data science competitions, competing against human teams to find predictive patterns in unfamiliar data sets. Of the 906 teams participating across the three competitions, the researchers’ ‘Data Science Machine’ finished ahead of 615. It also achieved this in a fraction of the time taken by the human teams it was competing against, gathering insights in a matter of hours rather than months. The system will only become more accurate as it is further refined.
Automated machine learning shows great promise for removing the human element from analytics, but MIT’s technology is unlikely to render the human role obsolete. Nor should it. Speed is not always the most important quality in decision making, and the value of restraint should not be overlooked. Humans have a unique ability to observe and synthesize, and will always play a part in decision making. As technology increasingly takes over the burden of decision making, it is vital that people understand how it works and where its limitations lie.