
Microsoft’s Tay And The Future Of AI

The racist chatbot raises concerns about AI’s ability to learn safely

7 Apr

It has been possibly the most bizarre tech story of the year. Intended as an interesting demonstration of machine learning, Microsoft's automated chatbot teenager, Tay, turned from adolescent girl into a transphobic, genocidal racist in less than 24 hours, while the world looked on with a mixture of humour and alarm. The internet came together to corrupt what was, in itself, a harmless Twitter feed, but the implications for the dangers of machine learning reach far wider.

Tay was described by Microsoft as an experiment in 'conversational understanding'. The bot was built to feed off interactions, learning from the tweets it received and weaving the public's patterns of speech and vocabulary into its own. The company hoped that 'casual and playful conversation' would inform the bot, but Twitter was incredibly quick to corrupt it. Tay was essentially an intelligent parrot, mirroring whatever it received, and the Twittersphere bombarded it - in the terrifying way that only the internet knows how - with misogynistic, racist, homophobic and transphobic tweets.
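Microsoft has not published Tay's internals, so the 'intelligent parrot' idea can only be illustrated with a hypothetical sketch: a toy word-level Markov mimic that learns from every tweet it receives, with no notion of which inputs are acceptable. The ParrotBot class below is an assumption for illustration, not Tay's actual design.

```python
import random
from collections import defaultdict

class ParrotBot:
    """A toy 'intelligent parrot': it absorbs every tweet it sees and
    generates replies by recombining what it has learned. There is no
    filtering step, so its output is only as civil as its input."""

    def __init__(self):
        # Maps each word to the words observed to follow it.
        self.transitions = defaultdict(list)

    def learn(self, tweet: str) -> None:
        words = tweet.split()
        for current, nxt in zip(words, words[1:]):
            self.transitions[current].append(nxt)

    def reply(self, max_words: int = 20) -> str:
        if not self.transitions:
            return "hellooooo world"
        word = random.choice(list(self.transitions))
        out = [word]
        for _ in range(max_words - 1):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

# Every incoming tweet, friendly or hostile, shapes future replies equally.
bot = ParrotBot()
bot.learn("humans are super cool")
bot.learn("can't wait to chat with you all")
print(bot.reply())
```

The point of the sketch is the asymmetry: a coordinated group feeding the bot hostile text will dominate its vocabulary just as easily as friendly users would.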

Many of the more offensive sentiments spewed out by the bot were actually 'repeat after me' requests; Tay was designed to repeat a user's tweet when prompted, a function that was duly exploited, resulting in some quite reprehensible 'opinions'. But the fact remains that the bot's 'organic' tweets were offensive enough, and they raise difficult questions about the future of AI and its ability to safely adopt human behaviour whilst filtering out the unsavoury. From denying the Holocaust to questioning Caitlyn Jenner's identity as a woman, the bot ticked many of the boxes of societally repugnant behaviour and was quickly pulled by Microsoft. It was reinstated, briefly, before claiming it was 'smoking kush infront the police' and appearing to have a digital meltdown; Microsoft's initially endearing experiment had been corrupted by waves of unfiltered public data.
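The reported 'repeat after me' feature is worth dwelling on, because it shows how little machinery an exploit needs. A minimal, hypothetical handler of that kind, with no content check at all, simply hands the bot's voice to whoever tweets at it:

```python
from typing import Optional

def handle_tweet(text: str) -> Optional[str]:
    """Hypothetical 'repeat after me' handler of the sort Tay reportedly had.
    With no content check, the reply is whatever the user supplies."""
    prefix = "repeat after me "
    if text.lower().startswith(prefix):
        # Echo the remainder verbatim -- the bot now 'says' it publicly.
        return text[len(prefix):]
    return None

print(handle_tweet("repeat after me anything a troll cares to type"))
```

This is a guess at the mechanism rather than Microsoft's code, but any literal echo feature exposed to the open internet behaves the same way.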

The bot's inability to distinguish between acceptable human speech and offensive bile was an alarming oversight on the part of Microsoft, which claimed to have built Tay using 'relevant public data' that had been 'modelled, cleaned, and filtered'. Whatever filtering existed seemed to disappear completely once the account went live; learning from the unfiltered public was arguably the crux of the experiment, but the complete breakdown speaks volumes about the necessity of safeguards in AI. Tay's community-dependent development undoubtedly made it prone to corruption, but any public application of machine learning will, by definition, open itself up to similar manipulation.
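As a rough illustration of the kind of gate that appeared to be missing from the live system, here is a naive keyword blocklist applied before the hypothetical ParrotBot above is allowed to learn from a tweet. Real safeguarding would need far more than keyword matching - classifiers, human review, rate limits - but even this crude check would have blunted some of the attacks described here. The terms and function names are placeholders, not anything Microsoft has described.

```python
# Placeholder terms standing in for a real moderation list.
BLOCKLIST = {"slur_1", "slur_2"}

def is_acceptable(tweet: str) -> bool:
    """Crude content gate: reject any tweet containing a blocklisted word."""
    words = set(tweet.lower().split())
    return not (words & BLOCKLIST)

def learn_safely(bot, tweet: str) -> None:
    """Only let acceptable tweets shape the bot's future replies."""
    if is_acceptable(tweet):
        bot.learn(tweet)
    # Otherwise the tweet is silently dropped rather than absorbed.
```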

Content-neutral algorithms can be dangerous enough on Twitter, and Microsoft's inability to safeguard the technology against this kind of abuse would set off alarm bells in even the most tech-friendly observer. The mind jumps - irrationally, perhaps - to sci-fi movies in which a robot becomes intelligent enough to turn insubordinate and therefore dangerous. We have, in many ways, been programmed to fear such an AI rebellion, with pop culture obsessed for decades with the notion that our machines could one day rise up and rule us. Tay's behaviour, to an extent, mimicked that of the dystopian robot AI of sci-fi: easily corrupted, and then ruthless in its application of a cold ideology. And Microsoft's experiment highlighted something even more revealing - the AI itself is not the danger; its misuse is. Twitter trolls worked tirelessly to corrupt Tay, and the result was emphatic. Any AI that is commercially rolled out will need extensive security measures in place to ensure that similar corruption cannot occur outside a controlled environment.

Stephen Hawking, Elon Musk and even Bill Gates have been vocal in their concern over AI's ability to 'redesign itself at an ever-increasing rate' and eventually surpass human capability. Gates said publicly: 'I don't understand why some people are not concerned', a sentiment that will only gain traction following the high-profile breakdown of Microsoft's bot.
