Medical diagnosis is one of AI's most important challenges. Tech giants such as Microsoft and IBM are already ploughing significant resources into the field, researching and developing technologies that can analyze data and images in ways that could prove vital in early diagnosis and the design of new medication regimes. And the potential is enormous. In one recent study of a year's worth of hospital admissions, the US Agency for Healthcare Research and Quality (AHRQ) estimated that machine learning could have prevented 4.4 million hospital admissions in the United States, representing $30.8 billion in saved costs.
There are, however, still significant obstacles, particularly in overcoming negative public perception. In a recent YouGov poll, 45% of those surveyed said they believed AI should be used for disease diagnosis - the second-highest level of willingness to see AI applied in any area, after gathering police intelligence - but 34% believed it should not be used for such purposes.
This is hard to understand, as there is some excellent work being done. CB Insights has identified 22 companies developing new programs for imaging and diagnostics, and this number is growing rapidly. One example is the research being done into Parkinson's disease. Between seven and ten million people worldwide are living with Parkinson's. It is the second most common age-related neurodegenerative disorder after Alzheimer's disease, and the combined direct and indirect costs of Parkinson's in the United States - including treatment, disability and similar payments, plus income lost through inability to work - are estimated at some $25 billion per year.
It is a problem that Massachusetts-based data analytics company GNS Healthcare has turned its attention to, developing a causal machine-learning platform that promises to improve our understanding of the disease - specifically the rate of progression of the neurodegenerative condition. Using data from the Michael J. Fox Foundation-sponsored Parkinson's Progression Markers Initiative, GNS worked alongside researchers from Cambridge and the University of Rochester to pinpoint genetic and molecular markers of rapid motor progression of Parkinson's. The work confirmed the role of one biomarker that had a known association with Parkinson's disease and also discovered a novel predictor.
The researchers came to their discoveries by running PPMI data and other clinical information through GNS Healthcare's causal machine-learning platform, REFS, which stands for Reverse Engineering and Forward Simulation. Iya Khalil, cofounder and chief commercial officer of GNS Healthcare, explained: 'It [REFS] reverse-engineers or learns from the data. It reconstructs the causal mechanism that gave rise to that data - the causal biological mechanism that explains why we're seeing patients progress, [which] patients would be better off with other treatments - observing causality. It's not just extracting a pattern from the data. Once we've learned these models, we can run simulations and run different scenarios to test different drugs or different gene perturbations. These analyses around the simulation, which can happen on the computer very quickly, can lead to much more powerful hypotheses that we can then give to clinicians to inform their decisions, or to researchers on the biological side.'
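The two-step idea Khalil describes - reverse-engineer a causal model from observational data, then forward-simulate interventions on it - can be illustrated with a deliberately simple sketch. The snippet below is not GNS's proprietary REFS platform: it is a toy Python example with a single invented biomarker and a known linear causal effect, showing how a parameter learned from data can then be used to simulate a hypothetical drug intervention.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed" data: a single causal link, biomarker -> progression.
# Both the variable and its effect size (2.0) are invented for illustration.
n = 5000
biomarker = rng.normal(1.0, 1.0, n)        # hypothetical molecular marker
progression = 2.0 * biomarker + rng.normal(0.0, 0.5, n)

# "Reverse engineering": recover the causal coefficient from the data.
# REFS searches over many candidate network structures; here the structure
# is assumed known, so a least-squares slope suffices.
beta = np.cov(biomarker, progression)[0, 1] / np.var(biomarker, ddof=1)

# "Forward simulation": predict progression under a hypothetical
# intervention (e.g. a drug) that halves the biomarker level.
simulated = beta * (0.5 * biomarker)

print(round(beta, 1))                                   # ~2.0
print(round(simulated.mean() / progression.mean(), 2))  # ~0.5
```

The point of the toy is the workflow, not the model: once a causal parameter has been estimated, "what if" scenarios (here, halving the biomarker) cost only a simulation run, which is what makes in-silico hypothesis generation so much faster than trialing each scenario clinically.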
This is just one of many examples. The Alzheimer's Disease Neuroimaging Initiative (ADNI) has developed a machine-learning algorithm that uses protein biomarkers to identify Alzheimer's disease. It can accurately identify imaging studies of patients progressing into dementia 84% of the time. Researchers are also combining next-generation RNA sequencing with an advanced analytics algorithm to create a highly accurate, non-invasive diagnostic test for ovarian cancer, a leading cause of cancer deaths among women. This is particularly important because a woman's survival often depends on doctors detecting the tumor before it has spread beyond the ovary. AI can even be applied in emergency rooms. Beth Israel Deaconess Medical Center in Boston, for one, has applied machine-learning algorithms to workflow processes to enable medical staff to better capture patients' 'chief complaints' on arrival. Steven Horng, an emergency physician and computer programmer at the center, noted: 'Being able to capture chest pain as a discrete entity can be very valuable downstream for clinical care and in launching things like order sets and clinical pathways.'
The idea that people are so concerned about where AI is going that they are too scared to see the technology applied in such important ways is extremely worrying. People have many concerns around AI. There is the concern that the data machines learn from comes with inherent biases that are exacerbated during the learning process. There are fears that AI will eliminate jobs without replacing them. There is the rise of autonomous weapons and the risk of the technology falling into the wrong hands. These concerns are justified, and people are right to keep them in mind. However, there are many areas, such as medical diagnosis, where the benefits are surely so significant that such concerns should not sway people's view on whether the technology should be used. Indeed, the proliferation of such fears is ultimately putting lives at risk.
There are several solutions to this problem. Firstly, there needs to be better education and more publicity for AI that's applied for social good. This is difficult to control, as good news so rarely makes the front pages and friendly robots do not make for successful Hollywood blockbusters. We also need to understand that fear of AI isn't the preserve of tin-foil-hat-wearing simpletons. Even the likes of Stephen Hawking and Tesla founder Elon Musk have urged caution, recently saying publicly that we need proactive regulation of AI. Musk claimed that 'by the time we are reactive in AI regulation, it's too late. Normally the way regulations are set up is when a bunch of bad things happen, there's a public outcry, and after many years a regulatory agency is set up to regulate that industry… It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization.' There needs to be an open debate that involves everyone, and there needs to be complete transparency about how AI is progressing. That means the black-box culture needs to make way for one that is more open.
Central to this, as Musk mentioned, is increased regulation. Regulation is key to the public perception of AI. People need to know that governments understand the technology and can exercise some degree of control over it as it evolves. Currently - perhaps correctly - it is seen as something likely to grow outside the realm of human control, and this unpredictability breeds fear. Greater regulation also has huge public support. In a recent Quartz survey, a staggering 84% of respondents thought AI should be regulated, while only 3% were opposed to the idea. Another danger that regulation could prevent is that of AI becoming associated only with its worst applications, for example autonomous weapons. The same thing happened with nuclear energy, which became associated with the atom bomb, dramatically hindering its progress in areas where it could have brought benefits. Many in the UN are already pushing to ban autonomous weapons in the same way chemical weapons are banned across the world, but public pressure is also needed.
Ultimately, if 34% of people do not even believe AI should be used to diagnose diseases, it is clear that we have a severe PR problem. While a degree of wariness is sensible, too much will hamper progress in areas where it is desperately needed. Governments need to ensure that they are in control of AI and that the good it can do is well publicized, or the consequences could be unimaginable.