Speaker Snapshot: 'Organizations Need To Establish A Culture Where One Can Fail'

We spoke to Michael Kubiske, Director, Center for Machine Learning NYC at Capital One


Ahead of his presentation at the Machine Learning Innovation Summit in New York on December 11 & 12, we spoke to Michael Kubiske, Director, Center for Machine Learning NYC at Capital One.

Michael has a broad background in model development, production deployments, economic research, and project management. He joined Capital One as the Director of New York's Center for Machine Learning (C4ML) in 2017. In his spare time, he researches non-standard indicator signals to predict cryptocurrency and stock movements on his home cluster. He is an avid cyclist living with his wife and daughter in New York City.

Why do you think we have seen machine learning use increase so dramatically in the past 3 years?

It boils down to two main things: GPUs and data. Machine learning has had its share of fits and starts since coming to the fore in the middle of the 20th century, but an exponential increase in data, combined with increased processing power and GPU accessibility, has recently made machine learning in production more feasible and democratized the field.

How do you think organizations could be utilizing machine learning better?

The main pillars are data, platform, and culture. First off, having your data ecosystem in the right place is a critical foundation. Similarly, it’s important to have a solid platform that enables you to manage your own GPU execution framework, iterate on models, and access your data quickly and at scale. Finally, to run machine learning effectively in production, organizations need to establish a culture where one can fail. Sometimes models don’t work, and there needs to be time and investment to rework a system where needed.

What are the biggest challenges currently facing the further spread of machine learning?

This problem is very different for different industries. Broadly, the labor market is one of the biggest impediments across the board. Companies that are less sophisticated in machine learning may not have the insight to hire the right person for the job, or they may be unwilling to spend the money to hire that talent. Apart from that, explainability in AI (essentially, unpacking the black box that decides how decisions are made—we have a whole work stream dedicated to this), regulatory concerns, and the ability to fully understand networks can all be impediments to further adoption of AI applications.

Do you think that machine learning regulation is currently fit for purpose?

As a bank, our regulations are strict, and rightly so. We are dealing with people’s money and their futures, and we expect nothing less than stringent guardrails on that. Sufficient regulation helps provide trust and peace of mind to our customers.

What can the audience expect to take away from your presentation in New York?

I’m aiming to highlight a nascent field of study in AI: explainability, or unpacking how neural networks arrive at their outputs. As more AI-based systems are deployed, the focus will shift to surfacing more useful information from the technology’s outputs, which we view not necessarily as more important, but as better in the long term.

You can catch Michael's presentation at the Machine Learning Innovation Summit in New York on December 11 & 12.

