The news that DeepMind, the Google-owned artificial intelligence company, is being provided with patient information by the Royal Free NHS trust (RFT) in the UK has drawn criticism from all sides. It has again raised the old argument about how much data governments and corporations should be able to access without permission if they’re using it for the ‘public good’, and how much they can be trusted to use it for its intended purpose.
A spokesman for RFT said: ‘The RFT approached DeepMind with the aim of developing an app that improves the detection of AKI (Acute Kidney Injury) by immediately reviewing blood test results for signs of deterioration and sending an alert and the results to the most appropriate clinician via a dedicated handheld device.’
However, it appears that the data being provided is not specifically related to AKI. According to documents revealed by New Scientist, the data includes information on people who are HIV-positive and patients who have gone through abortions and drug overdoses. It also includes logs of day-to-day hospital activity: records of the location and status of patients, and of who visits them and when. The hospitals will also share the results of certain pathology and radiology tests.
Sam Smith, of the health data privacy group MedConfidential, told New Scientist: ‘What DeepMind is trying to do is build a generic algorithm that can do this for anything – anything you can do a test for. The big question is why they want it. This is a very rich data set. If you are someone who went to the A&E department, why is your data in this?’
The RFT says that it has applied its standard data-sharing agreement, which is in line with the legislation and policy requirements published by the regulators. However, in order to give Google access to the healthcare data of almost 1.6 million patients, the NHS relied on a loophole around ‘implied consent’, which holds that patient consent is not required if the data is being used for direct care. There is obviously some ambiguity around exactly what constitutes direct care, and there are fears that Google could stretch the definition as it pleases. Indeed, it is hard to see how records of patients’ visitors could be considered direct care, and skepticism persists about what the data will actually be used for. Indirect care, by contrast, supposedly includes the analysis of information for research, commissioning or payment of service providers, and auditing of services, and you would surely imagine that what DeepMind is doing constitutes analysis for research.
There are safeguards in place, but worries persist. As with all information-sharing agreements between the NHS and external organizations, patients can opt out of the data-sharing by contacting the trust’s data protection officer. However, there are concerns about how easy this is in practice, and many patients will be unaware that their data is being shared in the first place. The agreement also states that Google may not use the data in any other part of its global business, and that the data is encrypted so as to be ‘totally unreadable’. The patient data will be stored by a third-party contractor in the UK rather than on DeepMind’s own offices or servers. Finally, DeepMind is required to delete the data once the project is finished and the agreement with the trust expires at the end of September 2017.
These safeguards should, really, be enough for most people to believe that Google will not use the data for anything beyond its intended purpose. But people remain skeptical. The tech giant’s primary revenue stream is advertising built on personal data, and this dataset is obviously a treasure trove for it. Critics have also attacked the NHS for making information-sharing decisions on our behalf, and there is a fear that current regulations are too weak, and are not being obeyed anyway.
The question is whether or not Google can be trusted with the information. When you sign up to a newsletter, you hand over your details knowing that the company will send you offers in the hope you will buy something, and in exchange you get the newsletter. There is always a trade-off when you share personal information, and people need to feel that it is worth it. In this case, DeepMind wants to use the data to support doctors by making predictions based on datasets too broad and large for any individual to analyze. DeepMind is attempting to build a generic algorithm that can make predictions about any disease, ideally catching it before any symptoms have even appeared. You would think this would be worth the trade-off.
If even one life is saved, then surely it is worth people feeling as if their privacy has been invaded. However, mistakes have been made in the lack of transparency, and in seeming to take the data without people’s knowledge. The co-founder of the project, Mustafa Suleyman, told The Guardian: ‘As Googlers, we have the very best privacy and secure infrastructure for managing the most sensitive data in the world. That’s something we’re able to draw upon as we’re such a core part of Google.’ Healthcare needs data, particularly the NHS, struggling as it is for funds. It is currently woefully behind in its adoption of technology. Much is still done on paper, wasting thousands of man-hours transporting notes and files around hospitals. The problems, however, arise when data is not collected transparently, and when the balance of who benefits from its use favors the company.