Vital that businesses realise AI bias risk, regulator warns
The use of artificial intelligence (AI) is to be monitored by an equality regulator to ensure the technology does not discriminate against people, amid emerging evidence that algorithms can, in some circumstances, perpetuate bias.
And with research suggesting that factors as simple as the web browser you use, how fast you type or whether you sweat during an interview can lead an AI system to make a negative decision about you, it is vital that organisations understand these potential biases if they are to avoid discriminating against people.
Technology ‘a force for good’ but important businesses understand risk of AI bias
The Equality and Human Rights Commission (EHRC) last week published new guidance to help organisations avoid breaches of equality law, giving practical examples of how AI systems may be causing discriminatory outcomes.
The UK-based regulator is not the first to look into monitoring AI usage. Baltimore and New York City have passed local bills prohibiting the use of algorithmic decision-making in a discriminatory manner, while the US states of Alabama, Colorado, Illinois and Vermont have passed bills creating a commission, task force or oversight position to evaluate the use of AI and make recommendations regarding its use.
And in an attempt to counter AI bias, the European Union has proposed new legislation in the form of the Artificial Intelligence Act, which suggests that AI systems used to help employ, promote or evaluate workers should be subject to third-party assessments.
Responding to the EHRC’s guidance, Marcial Boo, its chief executive, said it was essential that businesses understood the impact of technology on people.
“While technology is often a force for good, there is evidence that some innovation, such as the use of artificial intelligence, can perpetuate bias and discrimination if poorly implemented.
“Many organisations may not know they could be breaking equality law, and people may not know how AI is used to make decisions about them.
“It’s vital for organisations to understand these potential biases and to address any equality and human rights impacts.”
Concerns over ‘urgent need’ to protect public from discrimination through AI
In a study published earlier this year in the Tulane Law Review, author Professor Sandra Wachter of the Oxford Internet Institute said decisions being made by AI programs could “prevent equal and fair access to basic goods and services such as education, healthcare, housing, or employment”.
“AI systems are now widely used to profile people and make key decisions that impact their lives,” she said. “Traditional norms and ideas of defining discrimination in law are no longer fit for purpose in the case of AI and I am calling for changes to bring AI within the scope of the law.”
There is an “urgent need to amend current laws to protect the public from this emergent discrimination through the increased use of AI,” the research also warned.
According to the World Economic Forum, biases can also occur when machine learning algorithms are trained and tested on data that under-represent certain subpopulations, such as women, people of colour or people in certain age demographics.
For example, studies show that people of colour are particularly vulnerable to algorithmic bias in facial recognition technology.
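To make that mechanism concrete, the sketch below (an illustration written for this article, not taken from any of the studies cited) trains a simple classifier on synthetic data in which one group is heavily under-represented. The group names, data and numbers are all assumptions for demonstration purposes; the point is only that the resulting model is measurably less accurate for the smaller group.

```python
# Illustrative sketch with synthetic data: a model trained on data that
# under-represents one subgroup ends up less accurate for that subgroup.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Two-feature synthetic data; each group's classes separate along a
    # slightly different direction, so a single linear model cannot fit
    # both groups equally well unless both are represented in training.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is under-represented.
X_a, y_a = make_group(5000, shift=0.2)
X_b, y_b = make_group(250, shift=1.5)

X = np.vstack([X_a, X_b])
y = np.concatenate([y_a, y_b])
group = np.array([0] * len(y_a) + [1] * len(y_b))

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0, stratify=group)

model = LogisticRegression().fit(X_tr, y_tr)

for g, name in [(0, "majority group"), (1, "under-represented group")]:
    mask = g_te == g
    acc = model.score(X_te[mask], y_te[mask])
    print(f"{name}: accuracy = {acc:.2%}")
```

Run on this synthetic data, the model scores noticeably worse on the under-represented group, because the decision boundary it learns is dominated by the majority group's patterns; this is the under-representation effect the World Economic Forum describes, reduced to its simplest form.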
And businesses need to carefully scrutinise the data ingested by AI, wrote Dr Rob Walker for AI Magazine. “If they don’t, irresponsibly used AI can proliferate, creating unfair treatment of certain populations – like unduly limiting loans, insurance policies, or product discounts to those who really need them. This isn’t just ethically wrong, it becomes a serious liability for organisations that are not diligent about preventing bias in the first place.”