Speechmatics: One third of people have experienced AI bias

A global expert in deep learning and voice technology, Speechmatics has released its new Voice Report 2022, sharing insights into the industry

Speechmatics has released its new Voice Report 2022, exploring the continued boom in the voice technology industry and looking into developments, expectations, and trends going forward.

In the UK, investment in the deep-tech sector has grown significantly over the past five years, rising 291% to £2.3 billion, according to the Annual Small Business Equity Tracker from the British Business Bank.

The future also looks increasingly bright for the global speech-to-text API market. According to Research & Markets’ annual review, the industry is projected to grow from $2.2 billion in 2021 to $5.4 billion by 2026, at a Compound Annual Growth Rate (CAGR) of 19.2% during the forecast period.
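For readers who want to sanity-check the projection, the implied compound growth can be reproduced directly from the start and end figures. The short Python sketch below assumes a five-year span (2021 to 2026) and the Research & Markets figures quoted above; it is illustrative arithmetic, not part of the report.

```python
# Rough check of the quoted market projection (assumed 5-year span, 2021-2026).
start_bn, end_bn, years = 2.2, 5.4, 5

implied_cagr = (end_bn / start_bn) ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")  # ~19.7%, close to the reported 19.2%

value_2026 = start_bn * (1 + 0.192) ** years
print(f"2026 value at 19.2% CAGR: ${value_2026:.1f}bn")  # ~$5.3bn, in line with the reported $5.4bn
```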

This growth will be driven by a number of different factors, many tied to the continued global pandemic, such as an increasing demand for AI-powered customer services and chatbots.

Working to address bias in AI

As AI continues to become an ever more present part of our daily lives, it is up to corporate organisations to be as proactive as possible in ensuring fairness. According to the report, 28.9% of respondents had experienced AI bias first-hand when using voice recognition.

Among the main concerns pointed out by respondents to the report were bias against dialects (21.2%) and accents (30.4%).

In a recent essay for the World Economic Forum (WEF), Agbolade Omowole, CEO of Mascot IT Nigeria, looked at strategies for mitigating fairness and non-discrimination risks and arrived at three conclusions. The first concerned inclusive design and foreseeability: he suggests considering race, gender, class, and culture at the design stage. The second is that user-testing groups should be as diverse as possible. His third recommendation is to perform a STEEPV analysis to detect fairness and non-discrimination risks in practice.

“What does ‘fairness in AI’ look like? Should a company of sufficient market size have to submit its algorithms for analysis to make sure it is not intentionally or unintentionally excluding disadvantaged groups? Are we prepared for the gremlins we may find? Companies working with voice should be asking themselves tough questions to ensure that our definition of ‘accuracy’ is as inclusive as possible,” said Michael Tansini, Product Manager at Speechmatics.

The ongoing impact of the pandemic on the technology industry 

The COVID-19 pandemic caused major disruption to organisations across the globe and had a significant impact on the technology industry. Voice technology is one of the many sectors that have seen a usage surge as consumers and businesses find ways to adapt to the new world of lockdowns and living more online.

Speechmatics asked which sectors people thought would see their use and application of voice technology significantly increase. Nearly half of those asked believed Banking and Healthcare were the two industries most likely to increase their need for voice technology in the aftermath of COVID-19, each receiving 13.9% of the vote. Consumer industries came out ahead of both Media and Entertainment, and Government. Telecommunications topped last year's survey; this year it has fallen behind a range of other industries.

AI developments for the next three years

Increased speaker diarization accuracy tops the list (13.5%) of hopes for the next three years. Diarization is the technical process of splitting an audio stream that often includes a number of speakers into homogeneous segments, each attributed to a single speaker.

Its place at the top of the list could be attributed to the pandemic and the rise of conference calls. Being able to detect who is speaking, and to jump easily between speakers, would be hugely beneficial. 2022 will likely see increased effort to improve speaker diarization, lifting use cases that benefit from matching a speaker with the words spoken.
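As a rough illustration of the idea, the sketch below clusters per-window speaker embeddings so that each cluster corresponds to one speaker, then merges consecutive windows into labelled segments. The fixed window length, the toy embeddings, and the assumption that the number of speakers is known in advance are all simplifications; production systems such as Speechmatics' use far more sophisticated segmentation and embedding models.

```python
# Minimal illustrative diarization sketch: cluster speaker embeddings per audio
# window, then merge consecutive windows with the same label into segments.
import numpy as np
from sklearn.cluster import AgglomerativeClustering

def diarize(embeddings: np.ndarray, window_s: float = 1.5, n_speakers: int = 2):
    """embeddings: one speaker-embedding vector per fixed-length audio window."""
    labels = AgglomerativeClustering(n_clusters=n_speakers).fit_predict(embeddings)
    segments, start = [], 0.0
    for i, label in enumerate(labels):
        end = (i + 1) * window_s
        # Close a segment when the speaker changes or the audio ends.
        if i + 1 == len(labels) or labels[i + 1] != label:
            segments.append((start, end, f"speaker_{label}"))
            start = end
    return segments

# Toy example: six 1.5-second windows, two clearly separated "voices".
toy = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8], [0.9, 0.0], [0.85, 0.15]])
print(diarize(toy))  # three segments: one speaker, then the other, then the first again
```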

Language identification (9.3%) was another key element respondents predicted would see improvement in the coming years. Detecting the language of the speakers within a video or audio file automates the manual task of selecting the correct language pack for transcription. By automating the language identification element of the transcription process, businesses can save time and human resource costs, as well as unlock new information that would previously have been lost.
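In workflow terms, automatic language identification simply replaces the manual step of picking a language pack before transcription. The sketch below is hypothetical: detect_language(), transcribe() and the LANGUAGE_PACKS mapping are placeholders for whatever speech-to-text service an organisation actually uses, not a real vendor API.

```python
# Hypothetical pipeline: detect the spoken language, then pick the matching
# language pack automatically instead of having a person choose it.
LANGUAGE_PACKS = {"en": "english_pack", "es": "spanish_pack", "fr": "french_pack"}

def detect_language(audio_path: str) -> str:
    """Placeholder: a real system would run a language-ID model on the audio."""
    raise NotImplementedError

def transcribe(audio_path: str, language_pack: str) -> str:
    """Placeholder: call the chosen speech-to-text engine with that pack."""
    raise NotImplementedError

def auto_transcribe(audio_path: str) -> str:
    code = detect_language(audio_path)   # e.g. "en"
    pack = LANGUAGE_PACKS.get(code)
    if pack is None:
        raise ValueError(f"No language pack for detected language '{code}'")
    return transcribe(audio_path, pack)  # automated: no manual pack selection
```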

Moving forward, progress in the technology comes with a greater sense of responsibility. Organisations that choose to use AI must make sure they check for any bias and work to improve the technology for a greater sense of inclusion.
