When you think of UX design, your mind probably goes first to visual elements like information architecture and interaction design. Similarly, if you search for a stock photo to represent UX design, the majority are related to wireframes - visual mock-ups of the placement of features on a screen. The focus on the visual side of UX design is unsurprising when most popular technology is, primarily, made up of screens, but emerging tech could signal a departure. VR and AR (though visual) require an entirely different mindset, for example, while voice-controlled tech is a different proposition altogether.
Arguably, there has been no bigger UX design challenge since the emergence of the smartphone than that posed by voice assistants. Voice-controlled technology has been a feature of some devices for most of this decade - Siri debuted in 2011, and Google’s own personal assistant followed in 2012 - but only in recent years has it become the foundation of products in its own right.
Largely, this is down to improvements in technology; just a decade ago Amazon’s Echo would not have been possible. Advancements in natural language processing are facilitating something of a revolution in the way we interact with technology. According to Shawn DuBravac, we are fast approaching the point at which machines reach parity with humans in terms of voice recognition and comprehension, with accuracy rates hitting around 94%. ‘We’re ushering in an entirely new era of faceless computing,’ DuBravac said, an era in which UX comes down to how well a machine can understand you and how appropriately it can respond.
Ultimately, voice-controlled UX is about conversational flow. With no visual elements to take into account, the key is ensuring that the interaction feels natural, frictionless, and intuitive. A revealing article [https://medium.com/cbc-digital-labs/adventures-in-conversational-interface-designing-for-the-amazon-echo-be15d792ae49?ref=webdesignernews.com] from Natasha Rajakariar - user experience designer at CBC - gives an insight into the kinds of things UX designers think about when creating that frictionless experience. For one, the instruction manual should give way to natural back-and-forth: users should be able to ask machines whether they can perform a certain task, rather than flicking through instructions to achieve the same end.
Secondly, the more natural the responses from the bot once it has understood the command, the better the experience will be. As Rajakariar discovered, ‘if interaction with the tool was too robotic or repetitive, we became annoyed. On the other hand, if we felt like we were talking to another human, our experience was more positive. It was therefore essential that we adjust Alexa’s commands until they sounded natural.’
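One simple technique behind that lesson is response variation: instead of repeating one canned confirmation on every interaction, the assistant picks from a pool of equivalent phrasings. The sketch below is purely illustrative - it is not CBC's implementation, and the phrasings and function names are hypothetical - but it shows the basic idea in Python.

```python
import random

# Hypothetical pool of equivalent confirmations. Repeating a single fixed
# phrase is what makes a voice interface feel "robotic or repetitive";
# rotating through natural variants keeps the exchange conversational.
CONFIRMATIONS = [
    "Sure, here's {item}.",
    "Okay, {item} coming right up.",
    "You got it - playing {item} now.",
]

def build_response(item, rng=random):
    """Return a varied, natural-sounding confirmation for a fulfilled
    request, rather than the same canned phrase every time."""
    template = rng.choice(CONFIRMATIONS)
    return template.format(item=item)
```

A real skill would layer more on top of this (tracking which variant was last used, adjusting tone to context), but even this minimal rotation addresses the repetitiveness Rajakariar describes.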
It’s these kinds of challenges that make voice control such a unique proposition for designers, and it’s one they’d do well to get to grips with quickly. According to Gartner, 30% of web browsing sessions will be done without a screen by 2020, while comScore expects 50% of all searches to be done by voice by the same year. People are resistant to change, and going screen-less is a big adjustment, but a fully conversational interaction with a machine that responds as you might expect a human to will feel comfortable almost immediately. This is the challenge going forward.
Voice-controlled interfaces are just one of the emerging technologies set to dominate UX/UI design in the coming years. Elsewhere, VR and AR will pose different but equally challenging problems for designers, particularly given how radically VR differs from any medium that preceded it. As we move away from the touchscreen, the mouse, and the keyboard as the dominant input devices, UX has the opportunity to become properly intuitive and to further remove friction from tasks. With some commentators predicting the death of the smartphone - which could come sooner than you think - it is time for UX design to properly diversify as a discipline.