When we talk about artificial intelligence (AI), the conversation is often tainted by a sense of trepidation. The technology is undeniably powerful, and for decades humanity has been fascinated by its potential in both destructive and constructive visions of the future. Ultimately, the likelihood is that the reality will be comparatively muted, a world in which people become accustomed to machine assistance but are unlikely to be overwhelmed by an army of sentient robots fuelled by murderous indignation.
Similarly, not all incarnations of AI will be co-opted by big business to help sell products; there will be genuinely positive applications. One of the tech industry’s major issues is that it rarely caters for disabled users. Most tech is fundamentally audiovisual, meaning those with impairments are often left using disappointingly inadequate accessibility features. The development of AI could, and in many ways already is, easing this problem. Intelligent automation can bring previously impossible levels of accessibility to otherwise problematic technologies.
YouTube, for example, is not subject to the same rules as TV broadcasters, and therefore is not obligated by the FCC to include captions on its videos to aid deaf viewers. Indeed, manually creating subtitles for its endless catalogue of videos (300 hours are uploaded every minute) simply wouldn’t be possible. Instead, the company has used speech-to-text software since 2009, tech that can transcribe speech to a relatively high degree of accuracy. Earlier this year, though, YouTube rolled out algorithms that can detect applause, music, laughter, and other non-verbal sounds for captioning, transforming the experience of watching subtitled YouTube videos into something altogether more rounded. ‘Machine learning is giving people like me that need accommodation in some situations the same independence as others,’ says Liat Kaver, a product manager at YouTube who is deaf.
Visual impairment can be a major restricting factor for social media users, too. Voiceover features on mobile devices and laptops allow users to hear the text on any given web page, but on media dominated by picture and video content, the tech currently falls short. In April this year, Facebook rolled out its artificial intelligence software that can describe photos to blind users. Though the technology is currently in its infancy, it can identify different objects, determine whether those pictured are smiling, and even whether or not a picture is a selfie.
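Systems like this pair a trained image classifier with a simple description generator: the classifier produces labels, and a separate step composes them into a caption a screen reader can speak. The classifier is the hard part, but the composition step can be sketched in a few lines of Python. This is an illustrative toy under assumed label names, not Facebook's actual code or API:

```python
# Toy sketch: turning detected image labels into a spoken description,
# in the spirit of automatic alt text. The detection step is assumed to
# have already happened; real systems use a trained neural network to
# produce these labels.

def describe_image(labels, people_smiling=False, is_selfie=False):
    """Compose an accessibility-friendly caption from detected labels."""
    description = "Image may contain"
    if labels:
        description += " " + ", ".join(labels)
    details = []
    if people_smiling:
        details.append("people smiling")
    if is_selfie:
        details.append("selfie")
    if details:
        description += "; " + ", ".join(details)
    return description + "."

print(describe_image(["two people", "outdoor", "tree"],
                     people_smiling=True, is_selfie=True))
# → Image may contain two people, outdoor, tree; people smiling, selfie.
```

The interesting design choice is hedged language ("may contain"): because the classifier can be wrong, the caption is phrased as a guess rather than a fact.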
Ultimately, it’s about ensuring that no one is excluded from enjoying tools like YouTube and Facebook on the basis of a lack of accessibility. ‘People with intellectual disabilities, or any disability, want to do what their friends and sisters and brothers do — use smartphones, tablets, and social networking,’ says Ineke Schuurman, a researcher at the University of Leuven. It’s a simple point but one that is driving AI development in disability aid: the demand for accessible products is high. The delay in bringing the products to market isn’t so much born of neglect as of the fact that the technology simply hasn’t previously been available to make it a reality. Will Scott, a researcher at IBM, said: ‘The computing power and algorithms and cloud services like Watson weren’t previously available to perform these kinds of things.’ Now that they are, expect to see new products proliferate.
But AI’s impact on the lives of the less able won’t be confined to the screen. A great deal of investment is being piled into the notion of caregiving robots: precise, diligent carers free from inconveniences like the need to sleep or have a life outside of the home. Theoretically speaking, robots could perform many of the tasks the elderly or the disabled find difficult to do alone, and advancements in machine learning are pushing the tech closer and closer to fruition. One such robot is Honda’s ASIMO, a humanoid that can recognize faces, converse using AI, manipulate objects with its agile fingers, make decisions without human intervention, and comfortably walk and run. The idea is that an intelligent robot could be programmed to fit the exact needs of a particular individual, be it helping with household chores, reminding people to take medicines, or simply making coffee.
Or helping out in the classroom, as a team of sophomore students from Rutgers University have been exploring. At the TechCrunch Disrupt Hackathon in New York this May, the team presented Robota, a social robot built to aid teachers in special needs classrooms. Through personal experience, the students found that some special needs children are more comfortable interacting with a robot than with a human being, viewing it as a nonjudgmental figure. Using computer vision and sentiment analysis technology, Robota is able to gauge the emotional state of a child and identify those who are visibly or audibly distressed, before approaching and prompting the child to share what is wrong. Based on the response given, Robota can then alert teachers or other support staff to the situation.
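At its simplest, sentiment analysis scores text against lexicons of positive and negative words. Robota's actual pipeline is far richer (it combines computer vision with language analysis), but a keyword-based toy conveys the core idea of classifying a reply and deciding whether to escalate. Every name and word list here is an illustrative assumption, not the students' implementation:

```python
# Toy sentiment sketch: flagging a distressed response from keywords.
# Real systems use trained models over speech, facial expression, and
# text; this rule-based version is only a conceptual sketch.

DISTRESS_WORDS = {"sad", "scared", "hurt", "alone", "angry", "crying"}
CALM_WORDS = {"happy", "fine", "okay", "good", "playing"}

def assess_response(text):
    """Return 'distressed', 'calm', or 'unclear' for a child's reply."""
    words = set(text.lower().split())
    distress = len(words & DISTRESS_WORDS)
    calm = len(words & CALM_WORDS)
    if distress > calm:
        return "distressed"   # escalate: notify a teacher or aide
    if calm > distress:
        return "calm"
    return "unclear"          # ambiguous: prompt the child again

print(assess_response("I feel sad and alone"))   # → distressed
```

Even this crude version illustrates the design pattern the article describes: the robot classifies first, and only the "distressed" outcome triggers a handoff to human staff.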
What both developments aim to do is ease the burden on support and care workers by automating the more menial, time-consuming tasks. Couple all of this with developments like driverless cars, which will offer the visually impaired a level of personal freedom currently unavailable, and it’s possible that AI could help mitigate significant boundaries for the disabled and the elderly. Yes, there is the chance that intelligent automation could become too powerful and enslave the human population. In the meantime, though, it might just make lives easier.