A Potted History of Big Data

A quick look at the history of data collection


Quantifying the world around us has been a critical activity since our prehistoric roots. The earliest known tallying tools have been carbon dated to around 18,000 BC, and you might imagine hunters counting their kills on their fingers deep in the mists of the early Stone Age.

Somehow we have an innate desire to measure ourselves, to know where we stand and to compare ourselves to others. The abacus was first invented around 2400 BC, and Greek versions have been found dating to 600 BC. The Greek orator Demosthenes (384–322 BC) spoke of the need to use pebbles for calculations too difficult to do in your head.

Around that time, the great library of Alexandria was founded, becoming the greatest “data centre” in the world at the time. The library was tasked with collecting all the world's knowledge, its staff occupied with translating works onto papyrus. It remained a major centre of learning for some 300 years until its destruction by the Romans in 30 BC.

In terms of complex statistical analysis, one notable early example came in 1663, when John Graunt used statistics to track mortality from the bubonic plague, helping officials understand and respond to its spread.

Many consider the first computer to have been invented by Charles Babbage in 1822 (though it remained a concept, as it was never completed in his lifetime), but its ancient predecessor came roughly 2,000 years earlier. The Antikythera Mechanism was an ancient analogue computer used primarily for astronomy. It was composed of at least 30 bronze gears and could perform relatively complex calculations.

Moving into the modern era, in 1965 the US government planned the world’s first data centre to store, you guessed it… tax returns. It also put 175 million sets of fingerprints onto magnetic tape. The Orwellian reality described in George Orwell's novel 1984 (written in 1949) was beginning to come true: “Big Brother” was discovering the tools to watch us.

Since the appearance of (arguably) the first modern computer, the Turing machine, in 1936, capacity for storing data has increased exponentially decade on decade, and with the birth of the World Wide Web in 1991, sharing data became a global possibility. In 1996, digital data storage officially became more cost-effective than paper, and in 1997 that repository of everything and anything, Google, came into being.

Hadoop was developed in 2005 – ten years ago – and now there are ever more sophisticated ways of storing unimaginable amounts of data.

In 2010, Google legend Eric Schmidt said that the amount of data being created every two days equalled the amount of data created from the beginning of time up to 2003.

Today, the amount of data being generated is staggering: Facebook users like 4,166,667 posts every minute of the day, for example. That is what I call a fast-moving industry…

I personally feel proud to be contributing in the smallest way to one of the most basic of human desires – to quantify the world around us. Big Data is here to stay because the thirst for data has been around for millennia…

Matt Reaney is the founder of a specialist Big Data recruitment firm, connecting innovative organisations with the best talent in Data Analytics and Data Science.

For more Big Data and Data Science related articles, visit our website or get in touch via LinkedIn or Twitter – @bigcloudteam


