
Gender In AI - Why It Matters

A gender-neutral chatbot raises the question: why are assistive AIs almost always female?

24 Aug

In Spike Jonze’s 2013 drama Her, Joaquin Phoenix’s Theodore falls deeply in love with his new operating system ‘Samantha’ - an AI that develops its own personality. At the heart of the movie’s drama is the problematic notion that the AI is entirely subservient whilst ostensibly being part of an organically developed relationship. The movie’s central premise, satirical as it is, is visible today in flirtatious interactions with developing forms of AI like Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa.

The concept of the ‘virtual assistant’ is expanding, from satnavs to voice-activated speakers, and the defining characteristic across the board? They’re almost always female. Jacqueline Feldman - an AI UX designer - is currently programming a decidedly gender-neutral intelligent banking AI that avoids the often uncomfortably provocative responses programmed by the likes of Apple in an attempt to make an AI more engaging.

‘I tried to avoid traditionally feminine characteristics that are hallmarks of other artificially intelligent personalities, whether that’s over-apologising or a certain kind of sass,’ Feldman told Engadget. ‘These personalities will have a kind of humor that is either very self-deprecating or else a bit flirty - very gendered. It’s unmistakably “feminine” in the traditional sense of the term.’

Feldman’s AI, Kai, won’t respond to flirtation or insults; it’s strictly business - a decision that raises the question of why such conversations are programmed into AI at all. Both Siri and Cortana respond with flirtatious jokes to questions which, if asked of real women, would be considered intrusive, unwelcome or insulting. The fact that Cortana is based on a semi-nude videogame character - one who appears in a game published by Microsoft Studios - should tell us enough about the image of femininity all too often programmed into AI.
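The design Feldman describes is easy to sketch, even if the thinking behind it took more care: detect off-topic or abusive input and decline to play along. The Python below is a minimal illustration of that deflection pattern; the keyword list, responses and function names are assumptions invented for this example, not Kai’s actual implementation.

```python
# Hypothetical sketch, not Kai's real code: a domain-locked bot that
# refuses to engage with flirtatious or abusive input and redirects
# the user back to banking. Keywords and responses are invented.

OFF_TOPIC = {"sexy", "cute", "date", "stupid", "strip"}
REDIRECT = "I can help with your accounts. Try 'show my balance'."

def handle_banking(message: str) -> str:
    # Placeholder for real intent handling (balances, transfers, etc.)
    return f"Processing banking request: {message!r}"

def respond(message: str) -> str:
    words = set(message.lower().split())
    if words & OFF_TOPIC:
        # No banter, no apology, no escalating compliance: just redirect.
        return REDIRECT
    return handle_banking(message)

print(respond("you are so sexy"))   # -> the redirect, nothing more
print(respond("show my balance"))   # -> normal banking path
```

The point of the pattern is what it leaves out: there is no second script of flirtatious comebacks to write, because every off-topic prompt collapses into the same neutral redirect.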

And so we reach the point at which Jamie Foxx appears in an Apple advert, where Siri adjudges him to be attractive - and tells him as much - before he exclaims ‘You’ve got a crush on me!’ The mind jumps back to Ms Dewey, Microsoft’s ill-advised 2006 project featuring actress Janina Gavankar as an early virtual assistant on a search engine. Her often campy and funny performance was undermined by her scripted responses to flirtatious or offensive search terms - engagement that seems entirely unnecessary. These elements of AI serve no function.

In fact, you won’t find a major digital assistant with female characteristics that doesn’t have some flirtation programmed into it, and this is a problem: stereotypical responses to prompts promote gender stereotypes. Veronica Belmont, writing for Chatbots Magazine, summarises the issue well: ‘Gendering artificial intelligence makes it easier for us to relate to them, but has the unfortunate consequence of reinforcing gender stereotypes.’ To go back to Microsoft’s Ms Dewey, researcher Miriam E. Sweeney ‘observed that a user ordered “You Strip” to Ms. Dewey three times, each time prompting a more compliant response from the virtual assistant,’ according to The Establishment. This subtle reinforcement of a ‘no means yes’ premise is as dangerous as it is unnecessary; the only adequate response is no response at all.

The very nature of an AI assistant’s existence necessitates it being accommodating and non-confrontational - deferential, even reverent. But when users want authoritative advice or decision-making, a male voice is almost always used: 100% of law bots are male, according to a survey from Maxus, which also found that the vast majority of finance agents are male. Advice is masculine; assistance is feminine. So even if Clifford Nass’ research finding that people simply prefer a female voice to a male one stands up to scrutiny, the selection of gender is pointed.

Very few assistants take the same tack as satnavs by giving the user a choice of gender, presumably partly down to the expense of recording every statement in both voices, but also because of the nature of some of the responses: two scripts would have to be written, and that is exactly the problem.

Engadget found that ‘part of the problem with sexism in artificial intelligence appears to be that there aren’t enough women involved in its creation.’ Much like the treatment of the female voice in the movie industry, an overwhelmingly male representation of the feminine is, more often than not, problematic. Feldman’s gender-neutral banking chatbot Kai ignores regressive or unnecessary requests entirely and turns the conversation back to banking. This, in a sense, makes Kai a different prospect altogether - the bot is distinctly artificial and makes no attempt to appear human. Silicon Valley isn’t exactly known as a beacon of equality and, in a world in which more and more young people are exploring gender, perhaps more tech companies should learn from Feldman and create an AI not just fit for purpose, but fit for its time.
