In 1950, the world’s 10 largest corporations were providers of physical goods. Today, only one still makes the majority of its profit from its physical products. The rest owe their success to the monetization of the data they collect. The two clearest examples of this are Facebook and Google, which monetize users’ data by targeting them very precisely with advertisements. However, raw data alone is useless unless it can be examined and learned from: it needs to be analyzed and organized before it can inform decisions and be acted upon.
The profitability of data analysis has led to the explosive growth of cloud computing and data centers. IBM projects that by 2020 there will be 75 billion internet-connected devices, generating 40 zettabytes (1 ZB = 1,000,000,000,000,000,000,000, or 10²¹, bytes) of data per year. By 2025, that figure is projected to quadruple. More than 80% of this data will have to remain unanalyzed because data centers’ analysis capacity cannot keep up with the massive rate of data generation.
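To put these projected figures in perspective, a little back-of-the-envelope arithmetic (using the article’s own numbers of 40 ZB per year across 75 billion devices) shows what the per-device load would be:

```python
ZB = 10**21            # 1 zettabyte = 10^21 bytes
devices = 75e9         # projected internet-connected devices
yearly_bytes = 40 * ZB # projected data generated per year

per_device_year = yearly_bytes / devices  # bytes per device per year
per_device_day = per_device_year / 365    # bytes per device per day

print(f"{per_device_year / 1e9:.0f} GB per device per year")  # 533 GB
print(f"{per_device_day / 1e9:.1f} GB per device per day")    # 1.5 GB
```

Roughly 1.5 GB per device per day, every day, gives a sense of why centralized analysis pipelines struggle to keep pace.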
The first reason data centers can’t keep up is a fundamental problem of physics. Running a data center requires massive amounts of power, so the most fundamental limit on a data center is the amount of electricity that can be safely routed into it. This limit has already been reached, and there are few innovations on the horizon to deal with it.
The second limitation of data centers is their inherent economic inefficiency. A data center’s infrastructure (land, buildings, maintenance, employees, and air conditioning) accounts for 80% of its operating costs. This is a steep overhead for a customer who is only interested in using the computing power.
Even with these limitations and inefficiencies, cloud data centers are the go-to option for analyzing large amounts of data because, at present, they are the only option. And at roughly $500bn, the cloud data center industry is ripe for disruption.
There are two competing visions of the future of data analysis, both of which completely scrap the centralized data center model. The first vision imagines quantum computing as the next big step forward in data analysis. Startups such as Rigetti Computing have already raised over $2m to push quantum development forward. However, this is a very new technology, and no one knows whether quantum computing will ever be more cost-effective than traditional computers. The most advanced quantum computer currently in existence, built by IBM, is capable of playing a game of Battleship.
The much more likely data center disruption scenario comes from startups like Hypernetwork.io. Hypernetwork.io has created a new programming model that enables personal computers to work with one another in sync, allowing data to be analyzed without ever being collected and centralized.
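Hypernetwork.io’s actual programming model is not detailed in the article, but the general idea of “analysis without centralization” can be sketched: each device reduces its own raw data to a small summary, and only the summaries travel to be merged. The function names below (`local_summary`, `merge`) are illustrative, not Hypernetwork.io’s API.

```python
def local_summary(readings):
    """Runs on-device: reduce raw readings to (count, sum, sum of squares)."""
    n = len(readings)
    s = sum(readings)
    ss = sum(x * x for x in readings)
    return (n, s, ss)

def merge(summaries):
    """Runs anywhere: combine per-device summaries into a global mean and
    variance without ever seeing any device's raw data."""
    n = sum(c for c, _, _ in summaries)
    s = sum(t for _, t, _ in summaries)
    ss = sum(q for _, _, q in summaries)
    mean = s / n
    var = ss / n - mean * mean
    return mean, var

# Three devices, each keeping its raw readings private:
device_data = [[1.0, 2.0], [3.0], [4.0, 5.0]]
summaries = [local_summary(d) for d in device_data]
mean, var = merge(summaries)
print(mean, var)  # 3.0 2.0
```

Because the summaries are tiny and combine associatively, the heavy reduction work stays on the devices and the coordinator only ever handles aggregates — the property that makes this model viable without a central data store.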
Believe it or not, the iPhone X units sold in Q1 2018 alone contain three times more processor cores than all the worldwide cloud data centers of Amazon Web Services, the largest cloud service provider. Nearly all of that computing power lies dormant in people’s personal devices while they are charging, sitting idle, or being carried around in pockets; Hypernetwork.io’s solution pays device owners to put this enormous, currently unused capacity to work.
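A scheme like this only works if it borrows cycles when owners won’t notice. A minimal, entirely hypothetical eligibility check (the `DeviceState` fields and thresholds here are assumptions for illustration, not any real platform API) might look like:

```python
from dataclasses import dataclass

@dataclass
class DeviceState:
    charging: bool    # plugged into power
    screen_off: bool  # owner not actively using the device
    battery_pct: int  # current charge level

def should_contribute(state: DeviceState) -> bool:
    """Contribute idle compute only when the device is plugged in, unused,
    and comfortably charged -- an assumed policy, not Hypernetwork.io's."""
    return state.charging and state.screen_off and state.battery_pct >= 80

# A phone on its charger overnight qualifies; one in active use does not.
print(should_contribute(DeviceState(True, True, 95)))    # True
print(should_contribute(DeviceState(False, False, 40)))  # False
```

Gating on charging and idleness is what lets the owner’s payout come out of capacity that would otherwise be wasted, rather than out of their battery life.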
Regardless of which solution is ultimately adopted, the global importance of data analysis ensures that a new technology will arise to overcome the limitations of data centers, and whoever develops that solution will inevitably usher in a new computational renaissance.