Driven by big data and deep learning models, Natural Language Processing (NLP) applications within organisations have grown rapidly in both capability and scope.
Now, with the ability to identify semantic meaning by mining large corpora of text, NLP opens new paths to digitise company knowledge, disrupting incumbent business models. The technology can dramatically improve a company’s ability to discern actionable insights from unstructured data by processing and understanding human ‒ or natural ‒ language.
“We see the emergence of NLP used as a strategic differentiator, where data-driven decision making that leverages specialised knowledge is used to process claims, inspect, and ensure compliance, review contracts and risks, extraction, and the repudiation of events,” comments Ronny Fehling, Partner and Director at BCG GAMMA.
“This new wave is important, in that these systems not only capture and represent internal company strategic knowledge, but also influence ‒ and even automate ‒ decisions and business processes, representing new business model change opportunities,” he adds.
One key way NLP comes into play ‒ particularly in the wake of COVID, as more staff work remotely, attend webinars rather than in-person training, and conduct video meetings ‒ is speech recognition technology. For comprehensive speech recognition to occur, NLP identifies and interprets spoken language, converting words and phrases into text.
As more meetings are conducted online, businesses are beginning to see the value in transcription: important points from the meeting agenda are recorded, which also streamlines preparation for the next meeting.
Despite the projected growth of the speech and voice recognition market ‒ expected to be worth US$28.3bn by 2026 ‒ there are still significant issues around bias that need to be addressed.
Biased data leading to biased software
Like many AI and machine learning technologies, the quality of speech recognition is entirely dependent on the data upon which it is trained.
With society possessing and perpetuating long-held biases, it’s inevitable that machines learning from related data will reflect this inherent bias. This is particularly significant as the public data with which NLP and speech recognition technologies are trained tends to come from a small section of society.
Emerson Sklar, Senior Director, AI & ML at Applause, explains that bias in speech recognition typically manifests in two ways ‒ either misunderstanding users or not understanding them at all ‒ both of which have major implications for users as well as for the company developing the technology: “For users, peoples’ acceptance of issues with speech recognition products is far lower than it is with traditional graphical apps, meaning they will become frustrated with the experience and with your brand far faster than might otherwise be expected. For businesses, the underlying purpose of implementing AI is to achieve significant ROI through either streamlining heavily manual practices or through accomplishing something otherwise impossible. Any bias present in such a system simply reduces its precision, efficiency, and ability to fulfil its commercial requirements.”
These biases are extremely important to eradicate, especially as we strive for racial and gender equality.
Sadly, as these biases are ingrained across societies, Denise Gosnell, Strategy Team at DataStax, outlines how this can have a real impact on the technology: “In Google’s word2vec, the most popular implementation of word embeddings, real bias has been observed. Word2vec gives developers the possibility to translate words into vectors and to do basic maths on words and topics. A concrete example: using word2vec in 2017 (1,2) you could predict words with maths, such as “King - Man + Woman” and obtain the result “Queen”, but also “Doctor - Man + Woman” and get “Nurse”.”
She adds: “This is when the machine learning community started realising how word embeddings could make mistakes. The application of maths was the same because it was made of addition and subtraction to make predictions, but the problem came from the inputs, publicly available writings and articles written by humans. From the shocking discovery of gender and racial bias in word2vec, a whole field of research was sparked ‒ but machine learning researchers are continuing to have problems with written language.”
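The analogy arithmetic Gosnell describes can be reproduced in miniature. The vectors below are invented for illustration ‒ real word2vec embeddings have hundreds of dimensions learned from text ‒ but they show how “King - Man + Woman” resolves to “Queen”, and how a gender skew absorbed from a biased corpus makes “Doctor - Man + Woman” resolve to “Nurse”.

```python
import math

# Toy word vectors along hand-made axes (gender, royalty, medicine).
# These are illustrative stand-ins for real word2vec embeddings; the
# gender skew on doctor/nurse mimics bias absorbed from a corpus.
vectors = {
    "man":    [0.0, 0.0, 0.0],
    "woman":  [1.0, 0.0, 0.0],
    "king":   [0.0, 1.0, 0.0],
    "queen":  [1.0, 1.0, 0.0],
    "doctor": [0.1, 0.0, 1.0],
    "nurse":  [0.9, 0.0, 1.0],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def analogy(a, b, c):
    """Nearest word to vec(a) - vec(b) + vec(c), excluding the inputs."""
    target = [x - y + z for x, y, z in zip(vectors[a], vectors[b], vectors[c])]
    candidates = {w: v for w, v in vectors.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(target, candidates[w]))

print(analogy("king", "man", "woman"))    # -> queen
print(analogy("doctor", "man", "woman"))  # -> nurse: the skew in the vectors surfaces
```

The maths itself is neutral ‒ addition, subtraction and cosine similarity ‒ which is exactly Gosnell’s point: the biased result comes entirely from the input vectors.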
Overcoming bias in NLP and speech recognition
The first step in overcoming the bias in NLP and speech recognition is recognising that the bias itself exists. Despite there being many approaches, tackling bias comes with no quick fix and technologists need to take a delicate and thorough approach.
“Design techniques such as human-in-the-loop, participatory design and multi-stakeholder involvement will help in the early stages of an AI’s development. When choosing the training datasets, questioning whether they’re suitable for the outcome function, applications, and domains can help reduce statistical bias. There are several techniques to achieve a more balanced statistical representation in datasets, such as class imbalance measures to detect bias in datasets, as well as techniques to mitigate it,” comments Fehling.
“Social and cultural factors must form part of the analysis, as they often cannot be directly captured by the aforementioned techniques. Such issues of ‘flattening’ the societal and behavioural factors inherent in the datasets themselves are often overlooked, yet result in a problematic effect,” he continues.
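A class imbalance check of the kind Fehling mentions can start very simply: count how each group is represented in the training data, then reweight under-represented groups. The speech-corpus group labels and counts below are invented for illustration; inverse-frequency weighting is one common mitigation, not the only one.

```python
from collections import Counter

# Hypothetical speech-corpus metadata: each clip tagged with a speaker group.
# The group names and counts are invented for illustration.
clips = (["us_english"] * 700 + ["uk_english"] * 200 +
         ["indian_english"] * 70 + ["nigerian_english"] * 30)

counts = Counter(clips)
total = len(clips)

# Imbalance ratio: most- vs least-represented group; 1.0 means perfectly balanced.
imbalance = max(counts.values()) / min(counts.values())

# Inverse-frequency weights, a common mitigation: rare groups get larger
# weights so training is not dominated by the majority accent.
weights = {group: total / (len(counts) * n) for group, n in counts.items()}

print(f"imbalance ratio: {imbalance:.1f}")  # 700/30, about 23.3
print(weights)
```

Detecting the skew is the easy part; whether reweighting is appropriate, or more data should be collected instead, is exactly the kind of social and contextual judgement Fehling argues cannot be automated away.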
Adding to this, Gosnell explains: “Every machine learning project must use methods to verify and measure the presence of bias in its systems because there are myriad ways that gender, racial, social, and other biases can exist within a system.”
“It is necessary to bring perspectives into new processes: checks and balances for bias must be woven throughout the development of machine learning systems. And, finally, through new processes ‒ bringing a different set of perspectives to the table and including them in all phases of the project ‒ it is possible to reach new inclusive behaviours,” she adds.
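One concrete way to “verify and measure the presence of bias”, as Gosnell urges, is to project word vectors onto a gender direction (woman minus man) and flag occupation words that sit far from neutral ‒ the idea behind published embedding-debiasing research. The vectors and the 0.3 flagging threshold below are invented for illustration.

```python
import math

# Toy 2-D vectors: axis 0 is a gender-correlated direction, axis 1 is
# "occupation-ness". All values and the threshold are illustrative.
vectors = {
    "man":      [0.0, 0.0],
    "woman":    [1.0, 0.0],
    "doctor":   [0.1, 1.0],
    "nurse":    [0.9, 1.0],
    "engineer": [0.5, 1.0],
}

def unit(v):
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

# Gender direction: woman - man, normalised.
gender = unit([w - m for w, m in zip(vectors["woman"], vectors["man"])])

def gender_skew(word):
    # Scalar projection onto the gender axis; 0.5 is neutral here because
    # "man" projects to 0.0 and "woman" to 1.0 on that axis.
    return sum(x * g for x, g in zip(vectors[word], gender))

for occupation in ("doctor", "nurse", "engineer"):
    skew = gender_skew(occupation)
    flag = "FLAG: skewed" if abs(skew - 0.5) > 0.3 else "ok"
    print(f"{occupation}: projection={skew:.2f} {flag}")
```

A check like this can run automatically in a pipeline, but deciding which word lists to audit and what counts as acceptable skew is where the diverse perspectives Gosnell calls for come in.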
Concluding, Sklar notes the importance of ethics, and of regulation that takes it into account, in ensuring the technology is both unbiased and ethical: “Regardless of the regulatory requirements, ethics is critical to any technology solution, and this is especially true for AI-powered solutions. From a development standpoint, you can ensure you’re treating your users ethically and in an unbiased manner by training your AI systems with as broadly representative a dataset as possible, listening to user feedback, and constantly iterating to continue to eliminate bias where present. Although the field of voice ethics is nascent, the non-profit Open Voice Network is leading the charge in developing the standards and guidelines to make every voice experience worthy of user trust, and provides a wealth of research and recommendations to help organisations achieve those same goals.”