Big data has had a considerable amount of mud thrown at it over the past few months, from data breaches at some of the world’s largest companies through to it being blamed for undermining the democratic process. One thing that has been constant is that people don’t mind having their data used as long as it benefits them and is stored as securely as possible. However, new developments may make data use more transparent for the people whose records are held.
One of the most exciting developments comes from DeepMind, who have recently announced their ‘Verifiable Data Audit’ technology for the health data they are currently working with.
It has been extensively covered that the company is working with NHS institutions, using its machine learning and AI technologies to help improve how patients are diagnosed and treated. However, one of the big concerns has always been that people do not like a company having access to their most sensitive information, especially one owned by Google, which is already seen as holding too much data on people. It is something that DeepMind hope to solve, not just for their partners’ data, but potentially across many different data types and a wide variety of industries.
DeepMind co-founder Mustafa Suleyman described why this is so important for the company: ‘Our mission is absolutely central, and a core part of that is figuring out how we can do a better job of building trust. Transparency and better control of data is what will build trust in the long term.’
The idea is essentially to create a system similar to a blockchain, but one that doesn’t require decentralised authentication of the chain; instead, it will be authenticated by the hospitals and health institutions themselves. There are several reasons for this, from the power consumption that decentralised verification would require, through to the fact that much of the data being authenticated is incredibly sensitive. Also, instead of a traditional chain architecture, DeepMind have adopted a tree structure, which allows for this institutional verification.
Once this kind of system is in place, hospital administrators will be able to see which pieces of data have been accessed and what they were used for, down to individual records. This can then be checked by the person the data describes, allowing complete transparency about why the data was used and the outcome of its use, all verified and tamper-proof.
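To make the idea concrete, here is a minimal sketch of how such a tamper-evident audit log can work. DeepMind describe a tree structure rather than a plain chain, and the details of their system have not been published, so this is not their implementation; it uses a simple hash chain (each access record is hashed together with the previous record’s hash) to illustrate the core property that no entry can be quietly altered after the fact. The `AuditLog` class and field names are illustrative.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first entry


def _hash(entry: dict, prev_hash: str) -> str:
    # Hash the entry together with the previous entry's hash, so
    # modifying any record invalidates every record after it.
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()


class AuditLog:
    """Append-only access log: each record is chained to its predecessor."""

    def __init__(self):
        self.entries = []  # list of (entry, hash) pairs

    def append(self, accessor: str, record_id: str, purpose: str) -> str:
        entry = {"accessor": accessor, "record": record_id, "purpose": purpose}
        prev = self.entries[-1][1] if self.entries else GENESIS
        digest = _hash(entry, prev)
        self.entries.append((entry, digest))
        return digest

    def verify(self) -> bool:
        # Recompute the whole chain; any tampered entry breaks the hashes.
        prev = GENESIS
        for entry, digest in self.entries:
            if _hash(entry, prev) != digest:
                return False
            prev = digest
        return True


log = AuditLog()
log.append("clinician_42", "patient_7/bloods", "sepsis risk check")
log.append("researcher_9", "patient_7/bloods", "model evaluation")
print(log.verify())  # True: the chain is intact

# Rewriting an earlier entry's stated purpose is detectable:
log.entries[0][0]["purpose"] = "something else"
print(log.verify())  # False
```

In a real deployment the verification step would be run by the institutions themselves, which is the role DeepMind assign to the hospitals in place of a blockchain’s decentralised miners.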
It is a use of the technology with real potential beyond DeepMind’s work in healthcare, and it could be transferred to almost any discipline where people trust the institutions involved to validate what their data was used for. For instance, many companies store communications from their employees for safety and legal reasons, but there is often deep mistrust over this, given the implications some of it could have if taken out of context. The ability to see who has accessed data and why would give employees confidence that it is being used in the right way. The technology has huge potential across almost every fathomable industry, even the intelligence services, which worry about unauthorized access and leaks like those we have recently seen from the NSA and CIA.
DeepMind are doing some truly exceptional work across many areas, not least in healthcare, but it may ultimately be this system, which has very little to do with machine learning or AI, that has their biggest impact. At present the system has only been proposed, but if it works, it could fundamentally change the way sensitive data is accessed whilst increasing transparency and trust for both organizations and users.