Autodesk Analytics Pipeline: Kafka-Splunk-Hadoop

Autodesk is making the transition from a traditional boxed-software company to the cloud. A key part of this transition is the development of a scalable analytics pipeline and the ability to measure active engagement with the broad capabilities of our portfolio of offerings across desktop, web, and mobile platforms. In this session we will discuss the care and feeding of our 800TB data lake, which is currently growing by more than 100GB/day. In addition, we will discuss our Analytics SWAT Team, a cross-departmental group helping to onboard products and services to the analytics pipeline within our highly matrixed organization. We will present the results of several internal product pilots as we optimize our ability to provide powerful tools for accessing the data lake to both technical and non-technical data consumers.
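To make the pipeline concrete: a minimal sketch of the kind of product-engagement record such a pipeline might carry. The field names and values here are hypothetical illustrations, not Autodesk's actual schema; serialized to JSON, a record like this would typically be published to a Kafka topic, indexed by Splunk for real-time search, and landed in Hadoop for batch analysis.

```python
import json
import time


def build_engagement_event(product, user_id, action, ts=None):
    """Assemble one engagement record for the analytics pipeline.

    Field names are hypothetical -- the session does not publish
    the actual event schema.
    """
    return {
        "product": product,
        "user_id": user_id,
        "action": action,
        "timestamp": ts if ts is not None else time.time(),
    }


# Example record, serialized as it might be before publishing to Kafka.
payload = json.dumps(
    build_engagement_event("AutoCAD", "u-123", "file_open", ts=0)
)
```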

Charlie Crocker
Business Analytics Program Lead
Charlie is a data geek with 20 years' experience bringing data out of the shadows to drive business value and optimize operational costs. At Autodesk he is currently working across divisions to identify and validate potential reliable data sources and access mechanisms. This includes getting data into Splunk, Hadoop, and Google BigQuery, and delivering real-time analytics data to stakeholders via Tableau and Geckoboard. Prior to Autodesk he was a partner in a geographic information systems start-up with a focus on spatial databases, web-based distribution of spatial data, and the integration of CAD and GIS for state and local government agencies and utility companies.
