Decentralized Analytics Functions Are The Future

Why analytics talent is moving closer to business units

17 Aug

According to Gartner, as many as 60% of big data projects started this year will fail to go beyond piloting and experimentation. These failures have serious ramifications: organizations that have been burned are often understandably reluctant to attempt another project.

The reasons for such failures are many, and vary from organization to organization - the software, the talent hired, external factors beyond anyone's control… all could be responsible to some degree. The project may even simply have been ill-conceived from the outset.

One common reason data projects fail is how the analytics team is structured, with the wrong choice leaving the project unable to produce tangible benefits. This is a difficult problem to solve. The right structure depends on a variety of factors, and knowing how best to approach it requires a rare combination of skills, including hands-on experience with data, strategic nous, and a deep understanding of the business.

Data teams can be either centralized or decentralized. When analytics is centralized, the data and the talent to provide analytical expertise and support reside in one group. This group serves a variety of functions and business units and works on diverse projects. The centralized unit is also responsible for setting the analytical direction of the entire organization. A decentralized analytics function, on the other hand, pushes the analytics experts out into the different business functions. Effectively, each unit creates and manages its own data ecosystem independently of the main hub, with analysts embedded in the team to interpret its data.

Both options have their merits. The main advantage of a centralized data analytics team is greater standardization of reporting across business units, and enforcing data governance policies becomes significantly more practical. Centralization prevents the same metric from being calculated differently across the organization, so divisions can confidently combine insights with their own data, knowing everyone is working from the same playbook. It also makes it much easier to deploy analysts to projects with strategic priority, which keeps the work challenging for the analysts themselves and gives them the opportunity to learn more about the organization.
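
To make the "same playbook" point concrete, here is a minimal sketch of a centrally owned metric definition that every business unit imports rather than re-deriving on its own. The function name, column names, and churn definition are illustrative assumptions, not anything specified in this article:

```python
import pandas as pd

def monthly_churn_rate(customers: pd.DataFrame, month: str) -> float:
    """Churn rate = customers lost during the month / customers active at its start.

    Expects columns 'signup_date' and 'cancel_date' (NaT if still active).
    Column names and the churn definition are illustrative assumptions.
    """
    start = pd.Timestamp(month)           # e.g. "2017-08" -> 2017-08-01
    end = start + pd.offsets.MonthEnd(1)  # last day of that month

    # Active at the start of the month: signed up before it began and
    # either still active or cancelled on/after the month began.
    active_at_start = customers[
        (customers["signup_date"] < start)
        & (customers["cancel_date"].isna() | (customers["cancel_date"] >= start))
    ]
    # Churned during the month: cancellation date falls inside it.
    churned = active_at_start["cancel_date"].between(start, end).sum()
    return churned / len(active_at_start) if len(active_at_start) else 0.0

customers = pd.DataFrame({
    "signup_date": pd.to_datetime(["2017-06-01", "2017-07-15", "2017-07-20"]),
    "cancel_date": pd.to_datetime(["2017-08-10", None, "2017-09-02"]),
})
print(monthly_churn_rate(customers, "2017-08"))  # 1 of 3 active customers -> 0.333...
```

When marketing and finance both call this one function, "churn" means the same thing in every report - which is the standardization argument in concrete form.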

Equally, centralization has a number of disadvantages. For one, it removes a degree of flexibility. With a decentralized approach, each unit is able to localize, process, and analyze data in an agile manner. Users can access the data more easily and quickly, without having to go through a gatekeeper. This lets them experiment more, improve their general understanding of the data, and learn to ask better questions of it. Furthermore, analytics can be fully integrated with a project right from its conception. As Piyanka Jain notes in Forbes, ‘The alignment of purpose this creates, produces very non-linear synergistic effects with respect to the value derived from analytics.’

A decentralized analytics function is, traditionally, far harder to implement. It usually demands department-wide cooperation on data initiatives, which often means a culture shift if the department has not traditionally been data-driven. Unsurprisingly, this can be a real challenge, but in theory it shouldn’t be. Data is tremendously powerful, and when people recognize that and understand how it can help them personally, they hop on board quickly. You need to demonstrate exactly how data can make each function within the department more effective and more efficient. It almost always can, and if you can communicate that to key stakeholders, you can generate real excitement.

Decentralization is also getting easier. Machine learning and advanced statistical algorithms can now do the groundwork of preparing the data, while automated data visualizations serve insights to the user on a silver platter. Even those who struggle to read graphs can get insights as plain text from automated natural language generation (NLG) software such as Quill. Companies are also maturing in their data efforts, with business users now capable of handling data with little trouble when required. Over the next year, we will likely see even more companies move away from centralization. However, they need to be wary that such a structure may not be right if the talent and tools are not in place, and should ensure that they are before making any commitments.
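
Quill’s own API is proprietary, but the core idea behind such NLG tools - compute a metric, then render it as a plain-English sentence a non-analyst can read without a chart - can be illustrated with a toy, hand-rolled template. Everything here (function name, wording, thresholds) is an illustrative assumption:

```python
def describe_change(metric: str, current: float, previous: float) -> str:
    """Render a period-over-period comparison as a plain-English sentence.

    A hand-rolled template sketch of what NLG tools automate at scale,
    not Quill's actual API.
    """
    if previous == 0:
        return f"{metric} is {current:,.0f}; no prior-period baseline to compare."
    pct = (current - previous) / previous * 100
    direction = "up" if pct > 0 else "down" if pct < 0 else "flat"
    if direction == "flat":
        return f"{metric} held steady at {current:,.0f}."
    return (
        f"{metric} is {direction} {abs(pct):.1f}% versus the prior period, "
        f"from {previous:,.0f} to {current:,.0f}."
    )

print(describe_change("Monthly active users", 118_400, 104_250))
# -> Monthly active users is up 13.6% versus the prior period, from 104,250 to 118,400.
```

Real NLG products add variety, ranking of which facts matter, and domain vocabulary, but the principle is the same: the narrative is generated from the data, so a business user in a decentralized team gets the insight without needing an analyst to translate.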
