Big Data Acceleration Hitting New Heights

With in-memory systems becoming more common, speeds are increasing


One aspect of Big Data that companies like to boast about is the size of their databases. To many, this indicates how much insight a company has into its customers, but in reality it simply means that the company holds a large amount of data. It does not necessarily mean that it holds meaningful information about its customers.

This is because until data is processed it is like seeds in a bag; without the soil in which to grow, you have only a potential crop.

With a huge database of analyzable data, the chances are that by the time it is analyzed, much of it will be out of date. This lack of relevance has a profound effect on the quality of the analysis too; after all, if you put garbage in, you get garbage out.

To complicate matters further, the speed at which data is collected brings its own challenges, since high collection rates put significant pressure on hardware to process the data effectively.

With both of these issues in mind, the use of in-memory systems has slowly been increasing. The idea is that rather than using traditional disk storage, the data is held within the main memory of the computer. This allows significant speed and performance improvements over traditional disk-based systems and even solid-state systems, sometimes by a factor of 1,000.
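The principle can be sketched in a few lines of code. This is a minimal illustration, not any vendor's actual system: the `InMemoryCache` class below is hypothetical, and it simply serves repeat reads from main memory (a Python dict) instead of going back to disk for every lookup, which is the core trade-off in-memory systems make at much larger scale.

```python
import os
import tempfile


class InMemoryCache:
    """Hypothetical sketch: keep file contents in RAM so repeat reads skip the disk."""

    def __init__(self):
        self._cache = {}      # main-memory store
        self.disk_reads = 0   # count how often we actually touch the disk

    def read(self, path):
        if path not in self._cache:
            # Slow path: go to disk, then keep the result in memory.
            with open(path, "rb") as f:
                self._cache[path] = f.read()
            self.disk_reads += 1
        # Fast path: serve straight from main memory.
        return self._cache[path]


# Demo: three reads of the same file hit the disk only once.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"customer records")
    path = f.name

cache = InMemoryCache()
for _ in range(3):
    data = cache.read(path)

print(cache.disk_reads)  # 1 -- only the first read went to disk
os.unlink(path)
```

Real in-memory platforms apply the same idea to entire working datasets rather than single files, which is where the large analytical speedups come from.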

These systems are not new, but they have been brought to the forefront of business thinking by a well-established business case: companies collect more data and need the analysis faster than ever. Software like Spark allows companies to use the technology within this landscape, and we are increasingly seeing vendors offering hardware that allows it to run efficiently.

What we are seeing today is the start of a price and speed war between some of the world’s largest hardware providers. For instance, Intel and Micron have recently announced their 3D XPoint memory technology, Oracle have their ‘Exalytics In-Memory Machine X4-4’ and SAS have several in-memory systems available.

Although demand for in-memory is presently relatively small, with an increasing amount of data being collected all the time, demand is likely to grow soon. These companies are attempting to pre-empt this, and with more businesses adopting the technology and more hardware providers offering solutions, the price is likely to decrease.

At present, the price of the technology and the immaturity of many companies' data collection efforts are the two main reasons for the slow uptake. Once data programmes mature and the price of this new technology decreases alongside them, the uptake of in-memory systems, and the acceleration of data speeds, is only going to increase.
