AI tech could transform how deaf people experience media

By Marcus Law
With thousands of hours of video uploaded to streaming sites every hour, AI sign language tech looks set to transform how deaf people experience media

With thousands of hours of video uploaded to streaming sites like YouTube every hour, people who are deaf or who have difficulty hearing can be excluded from mainstream information and entertainment.

But with advances in technology, artificial intelligence (AI) sign language avatars are being brought to television audiences, with the potential to transform how deaf people experience media.

As many as one in six people are deaf or have difficulty hearing, and there are around 70 million sign language users globally, using more than 300 different sign languages between them. People who are born deaf, and children of deaf adults (CODAs), may learn sign language as their first or only language. And while research has been underway for a number of years, the complexity of sign languages means broadcast-standard AI translation has been a difficult goal to achieve.

AI signers could help meet the demands of a digital world

Sign language is more than hand gestures alone: it comprises three elements, hand gestures, body movements and facial expressions, which together express meaning. Raising the eyebrows, for example, can turn a statement into a question.

Like spoken languages, sign languages are not mutually intelligible with one another. A user of American Sign Language (ASL), for instance, would not be able to understand British Sign Language (BSL) and vice versa, because each sign language develops from its own region's dialect and culture.

UK start-up Robotica is creating state-of-the-art AI to bring sign language translations to the small screen and make more programming accessible. The company says its digital signers already know British Sign Language and are now learning American, Italian and other sign languages, as well as visual signing systems such as Makaton and Cued Speech.

CEO Adrian Pickering says: “There’s a global shortage of sign language translators and interpreters. They work really hard to improve lives in hospitals and courtrooms, at job interviews, helping people buy a new home. It’s a tough job and takes years to learn. Even if there were a hundred times as many translators, there still wouldn’t be near enough to meet the demands of a content-hungry digital world.  

“Every single hour, tens of thousands of new pages are crafted, 30,000 hours of new videos are uploaded to YouTube. The only way that sign language users can gain equality of access to information and entertainment is with machine translation.”

AI and computer vision can help solve the problem

Sign languages do not share grammar or concepts with their local spoken counterparts, and typically cannot be written down, meaning that for many deaf people subtitles and audio description may be of no help.

“Learning to read English as a second language, without being able to hear it, is like learning to read Korean without knowing how to speak it,” said Catherine Cooper, Robotica’s Product Owner and Deaf Culture Consultant. “For children in particular, subtitles just don’t work.  We need sign language on TV as that’s the language we think and speak.”

Since sign language translation remains relatively experimental, there is not yet a system or device that translates between sign languages, such as from ASL to BSL, or from a sign language into a foreign spoken language. Researchers around the world, however, are developing systems that translate their own regions' sign languages.

For instance, researchers from the Complex Software Lab at University College Dublin have put together a new AI-based technology that can translate Irish Sign Language (ISL) into spoken words, leveraging computer vision and deep learning to capture facial expressions for more accurate translation.

In 2019, researchers from Michigan State University rolled out a deep learning-backed sensory device called DeepASL, which can translate complete ASL sentences without requiring users to stop after each sign. And in a Google AI blog post, research engineers Valentin Bazarevsky and Fan Zhang said the intention of the company's freely published hand-tracking technology, which can perceive the shape and motion of hands, was to serve as "the basis for sign language understanding".
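To give a flavour of why hand tracking is such a useful building block: models like Google's output a fixed set of hand keypoints per video frame, and downstream software reasons about handshape from their geometry. The minimal sketch below, in Python, classifies which fingers are raised from the widely published 21-landmark layout (wrist at index 0, fingertips at 8, 12, 16 and 20); the function and the toy frame are illustrative assumptions, not code from Robotica, Google or any of the research teams mentioned.

```python
# Indices follow the common 21-point hand landmark convention:
# fingertips and the middle (PIP) joints of the four fingers.
FINGERTIPS = {"index": 8, "middle": 12, "ring": 16, "pinky": 20}
PIP_JOINTS = {"index": 6, "middle": 10, "ring": 14, "pinky": 18}

def extended_fingers(landmarks):
    """Return the names of fingers whose tip lies above its PIP joint.

    `landmarks` is a list of 21 (x, y) tuples in image coordinates,
    where y grows downward, so "above" means a smaller y value.
    """
    return [name for name, tip in FINGERTIPS.items()
            if landmarks[tip][1] < landmarks[PIP_JOINTS[name]][1]]

# Toy frame: every landmark low in the image except the index and
# middle fingers, whose tips are raised above their PIP joints.
frame = [(0.5, 0.9)] * 21
frame[6] = (0.45, 0.5)   # index PIP joint
frame[8] = (0.45, 0.2)   # index fingertip, raised
frame[10] = (0.55, 0.5)  # middle PIP joint
frame[12] = (0.55, 0.2)  # middle fingertip, raised
print(extended_fingers(frame))  # ['index', 'middle']
```

A real system would feed sequences of such per-frame features, along with body pose and facial landmarks, into a sequence model, since signs unfold over time and facial expression carries grammatical meaning.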

And Microsoft has teamed up with the National Technical Institute for the Deaf, using a presentation translator on classroom desktop computers to help students who are deaf or hard of hearing.
