AI tech could transform how deaf people experience media

By Marcus Law
With thousands of hours of video uploaded to streaming sites every hour, AI sign language tech looks set to transform how deaf people experience media

With thousands of hours of video uploaded to streaming sites like YouTube every hour, people who are deaf or who have difficulty hearing can be excluded from mainstream information and entertainment.

But advances in technology are now bringing artificial intelligence (AI) sign language avatars to television audiences, with the potential to transform how deaf people experience media.

As many as one in six people are deaf or have difficulty hearing, and there are an estimated 70 million sign language users globally, collectively using more than 300 different sign languages. People who are born deaf, and children of deaf adults (CODA), may learn sign language as their first or only language. And while research has been underway for a number of years, the complexity of sign languages means broadcast-standard AI translation has been a difficult goal to achieve.

AI signers could help meet the demands of a digital world

Sign language involves more than hand gestures alone. It combines three elements: hand gestures, body movements and facial expressions, which work together to express meaning. Raising the eyebrows, for example, can turn a statement into a question.

Sign languages are like spoken languages in that they are not mutually intelligible. A user of American Sign Language (ASL), for instance, would not understand British Sign Language (BSL) and vice versa, because each sign language develops out of its own region's dialect and culture.

UK start-up Robotica is creating state-of-the-art AI to bring sign language translations to the small screen, making more programming accessible. The company says its digital signers already know British Sign Language and are now learning American, Italian and other sign languages, as well as visual signing systems such as Makaton and Cued Speech.

CEO Adrian Pickering says: “There’s a global shortage of sign language translators and interpreters. They work really hard to improve lives in hospitals and courtrooms, at job interviews, helping people buy a new home. It’s a tough job and takes years to learn. Even if there were a hundred times as many translators, there still wouldn’t be near enough to meet the demands of a content-hungry digital world.  

“Every single hour, tens of thousands of new pages are crafted, 30,000 hours of new videos are uploaded to YouTube. The only way that sign language users can gain equality of access to information and entertainment is with machine translation.”

AI and computer vision can help solve the problem

Sign languages do not share grammar or vocabulary with their local spoken counterparts, and typically cannot be written down, meaning that for many deaf people subtitles may be of little help.

“Learning to read English as a second language, without being able to hear it, is like learning to read Korean without knowing how to speak it,” said Catherine Cooper, Robotica’s Product Owner and Deaf Culture Consultant. “For children in particular, subtitles just don’t work. We need sign language on TV as that’s the language we think and speak.”

Because sign language translation remains relatively experimental, no system or device yet translates between sign languages, such as from ASL to BSL, or from a sign language into a foreign spoken language. Researchers around the world, however, are developing systems that translate their own regions' sign languages.

Researchers at the Complex Software Lab at University College Dublin, for instance, have developed AI-based technology that translates Irish Sign Language (ISL) into spoken words, using computer vision and deep learning to capture facial expressions for more accurate translation.
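
The UCD system itself has not been published, but the general shape of such a pipeline can be sketched. The hedged Python example below, with hypothetical feature dimensions and vocabulary size, shows how per-frame hand and face landmark features from a computer-vision front end might feed a small recurrent network that maps a video clip to a sign.

```python
# Illustrative sketch only -- not the UCD system, whose code is not public.
# Assumed pipeline: a computer-vision front end produces per-frame hand and
# face landmark features; a recurrent network maps the sequence to a sign.
import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    def __init__(self, feature_dim: int, num_signs: int, hidden: int = 128):
        super().__init__()
        # The GRU consumes one landmark feature vector per video frame
        self.encoder = nn.GRU(feature_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_signs)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feature_dim) landmark features
        _, final_state = self.encoder(frames)
        return self.head(final_state.squeeze(0))  # logits over signs

# Hypothetical feature layout: 2 hands x 21 landmarks x 3 coords,
# plus 50 facial-expression features; 500-sign vocabulary.
model = SignClassifier(feature_dim=2 * 21 * 3 + 50, num_signs=500)
logits = model(torch.randn(1, 64, 176))  # one 64-frame clip
```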

In 2019, researchers from Michigan State University rolled out a deep learning-backed sensing device called DeepASL, which can translate complete ASL sentences without requiring users to pause after each sign. And in a Google AI blog post, research engineers Valentin Bazarevsky and Fan Zhang said their freely published hand-tracking technology, which can perceive the shape and motion of hands, was intended to serve as "the basis for sign language understanding".
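
That hand-tracking technology is available through Google's open-source MediaPipe framework. As a rough illustration of the signal it produces, the sketch below (assuming the Python mediapipe and opencv-python packages) extracts 21 three-dimensional landmarks per hand from each video frame; the actual sign recognition that would sit downstream of the landmarks is left out.

```python
# Minimal sketch: per-frame hand landmarks from MediaPipe Hands.
# pip install mediapipe opencv-python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(
    static_image_mode=False,      # video mode: track hands across frames
    max_num_hands=2,
    min_detection_confidence=0.5,
)

cap = cv2.VideoCapture(0)         # webcam; substitute a video file path
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV delivers BGR
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        for hand in results.multi_hand_landmarks:
            # 21 (x, y, z) landmarks per hand, normalised to the frame
            print([(lm.x, lm.y, lm.z) for lm in hand.landmark])
cap.release()
hands.close()
```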

And Microsoft teamed up with the National Technical Institute for the Deaf to equip classroom desktop computers with its Presentation Translator, which displays real-time captions of a lecture for students who are deaf or hard of hearing.
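
The classroom system itself is not public, but the underlying live-captioning idea can be sketched against Microsoft's Azure Speech SDK. This is only an illustration of the concept, not the Presentation Translator implementation, and the subscription key and region below are placeholders.

```python
# Hedged sketch of live captioning with the Azure Speech SDK.
# pip install azure-cognitiveservices-speech
import azure.cognitiveservices.speech as speechsdk

speech_config = speechsdk.SpeechConfig(
    subscription="YOUR_KEY", region="YOUR_REGION"  # placeholders
)
# Uses the default microphone as the audio source
recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config)

# Print interim hypotheses as captions while the speaker talks
recognizer.recognizing.connect(lambda evt: print("caption:", evt.result.text))

recognizer.start_continuous_recognition()
input("Press Enter to stop captioning...\n")
recognizer.stop_continuous_recognition()
```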
