Will DeepMind's Ethics and Society Unit Work?

It's a good idea, but can it actually be effective?

5 Oct

Much has been made of the future impact of AI on our society. Newspapers have run headlines like 'Robots will take a third of British jobs by 2030, report says' and 'Rise of robots taking jobs to be "painful and enduring"', stirring up enough negative sentiment that, according to a report from Pew, 70% of Americans fear robots could take over our lives. Even Elon Musk, widely regarded as one of the most forward-thinking people of our generation, famously said that AI could lead to a third world war.

One of the elements stoking this negativity towards AI is that the vast majority of people do not understand it, and with governments still getting to grips with legislation for data collection that was possible 20 years ago, there is little hope that they will be able to control the worst possibilities of future AI development. We know that AI development is accelerating and that its capabilities today are already putting livelihoods at risk. For instance, Google recently launched the Pixel 2 phone and Pixel Buds, which use AI to translate conversations in real time; if this becomes more widely available, there will suddenly be little need for translators. Companies like Google, Ford, and Uber have already racked up millions of miles with their self-driving cars, and the threat this poses to haulage drivers and taxi drivers is huge. The issue these groups face is that there is little basic understanding amongst the governments meant to represent them, and little sympathy from companies racing to bring the first autonomous vehicles to market.

However, DeepMind has taken up this mantle, recently announcing its Ethics and Society unit, which will look into some of the ethical questions surrounding AI moving forward. DeepMind famously demanded that Google create an ethics committee as part of the £400m deal to buy the company in 2014, but that committee has been shrouded in secrecy, to the extent that people are unsure whether it has even convened since it was announced in January 2016. This new unit, by contrast, aims to provide transparency on a subject that many people have questions about.

The unit initially comprises eight permanent DeepMind employees and six external unpaid members, with this 14-person group set to expand to 25 over the next 12 months. The idea behind this mix is to quell fears surrounding Google's often creepy use of data, which seems cloak-and-dagger to many given how little is known about what the company holds and why. At present, the external 'fellows', as they are being called, include Columbia development professor Jeffrey Sachs, Oxford AI professor Nick Bostrom, and climate change campaigner Christiana Figueres.

According to Mustafa Suleyman, co-founder of DeepMind, 'We’re going to be collaborating with all kinds of think tanks and academics. I think it’s exciting to be a company that is putting sensitive issues, proactively, up-front, on the table, for public discussion.' When we consider the biggest challenges AI faces, this seems like the logical approach: take the best minds in the field and put them to work on the issues that will arise from the use of AI in the future.

However, if the past two years have taught us anything, it's that logic and reason are often thrown out the window when people are operating from a point of fear, especially when their financial interests and personal safety are involved. In the UK, Michael Gove famously said 'People in this country have had enough of experts' in the face of criticism of his stance on Brexit from every major international organization, trade union leader, and respected academic. In the US, most major political figures, including every living former president, 240 newspapers, and many foreign leaders warned against voting for Donald Trump. Fear and emotion trumped logic and research in both cases, with Donald Trump winning the presidency and the UK voting to leave the EU.

The subject of AI is one that fills people with fear. Hundreds of articles have been written scaring people about how they will lose their jobs or see the world destroyed by robots. There are studies that show the opposite, that AI will actually create more jobs, such as the Gartner study claiming that although 1.8 million jobs will be lost to AI before 2020, 2.3 million will be created in the same period, a net gain of half a million. The issue is that this finding is neither headline-grabbing nor emotional, so it has had little coverage compared to the far scarier 'robots will steal your jobs and kill you' headlines.

Logic, academia, and study are what should set the agenda for AI, but with the best will in the world, a company linked to Google, seen by many as the boogeyman of data, and staffed with academics few outside their fields have heard of, will do little to assuage the fears of the population. Hopefully the transparency of this group will help, but the reality is that it will need to cut through a huge amount of noise to get positive and informed messages out. That means the unit needs to spend as much time thinking about PR as it does about study, something that tech companies have unfortunately never been especially good at.
