5 Big Data Implementation Blunders to Dodge

We tell you the five things to avoid when implementing a data programme


Many have claimed that Big Data is all hype and that its actual benefits to a company are limited. Companies that have effectively adopted Big Data initiatives have shown this to be untrue: efficient use of data and the right systems can have a significant impact on a business's performance.

So why are there those who believe it is overrated? We believe it is because they have committed one of these five data blunders when trying to adopt a data-driven system.

Not Enough Scope

If a company decides to start on a data-led path, one important thing is not to expect the data to work by itself. The reality is that many companies will 'do data science', find that they are doing one thing well and another badly, but then not go deeper. Attempts to improve failings or build on successes are then undertaken on gut feeling rather than data, undermining the whole point of having a data programme in the first place.

When findings are shown, it is important that they are fully investigated through the data: what do they say about specific successes or failures? These insights can then be acted on and monitored moving forward. This doesn't happen straight away, yet people often believe that, having invested heavily in the programme, it should.

Trying To Do It On The Cheap

Starting a data science programme is much like starting any other programme: you get out what you put in. Nowhere is this more true than with data, as putting unclean data through any system will produce inaccurate results when reports are pulled.

The same is often true of open source software.

Although open source software is a key foundation of data programmes across the world, companies like Hortonworks and Splunk have become so successful precisely because they have built on it and made it better. Open source is a good place to experiment with data, but in a business context, when something sounds too good to be true, it sometimes is.

No Qualified Employees

Having expensive software and supercomputers is useless unless there is somebody who can use them effectively. The best analogy is a kitchen: it might have the finest utensils and ovens, but without a good chef, nothing good will come out of it.

The people working on your data programmes need to be fully qualified, with the ability to collect and identify findings and communicate them to the wider organization. If this isn't the case, all any data programme will achieve is disappointment.

Moving Too Quickly

Although putting doubts aside and investing heavily can be exciting, it comes with a certain degree of expectation from those who put money in. Reaching data maturity takes time, and for good reason: the foundation of any analysis should be data sets that will produce accurate results, and it takes preparation to create the right working environment for the analysis to be done well.

Many companies want to start seeing results quickly, and when there appear to be holdups caused by historical bad practices, it is the new employees who are looked upon poorly.


Expecting A Magic Formula

One of the key aspects of any data implementation is that it is not a magic formula. You can't 'Big Data' your way to more money.

Data allows elements of the company to be measured and adjusted accordingly, which can improve those departments. The gains come from optimizing departments rather than from wholesale changes driven by a single dataset.

Becoming a data-driven company takes time. Trying to push everything through straight away at 100mph will result in failure, which means that those who feel it is all hype are generally those who were sucked into the hype machine in the first place.

