A very human problem: The battle against bias in AI

Humans have been feeding thousands of years' worth of data into AI algorithms. Now we must correct the bias that has infected those datasets

Artificial intelligence (AI) does not have opinions. There are no baked-in beliefs it would like to champion, no self-evident propositions it's itching to share with the world. Instead, AI relies on and learns from huge datasets generated by humans.  

And while nobody consciously includes bias in a database, over time it has crept in. 

Without human intervention, AI risks reinforcing damaging societal biases, limiting its impact as the next great technical innovation.

AI depends on data collected and labelled by data scientists, and on algorithms defined, trained and tested by AI developers, according to Francesca Rossi, IBM Fellow and AI Ethics Global Leader. And that’s where the problems often start.

“People are biased – mostly in an unconscious way – so it is possible that, without a careful methodology, such biases are embedded in both the data and in any other developer’s decisions in building an AI system,” says Rossi. “We devote significant effort to tools, methods, education and impact assessment processes to ensure that AI bias is detected and mitigated by our developers, consultants, sellers and all other IBMers in their respective roles.” 

A company-wide approach is overseen by a centralised governance structure, with the IBM AI Ethics board providing guidance and support throughout.

The IBM playbook goes beyond fairness and also covers other trustworthy AI pillars, such as transparency, explainability, robustness, and privacy, explains Rossi. “In this way, we ensure that the AI systems we build and deliver to our clients can be trusted and have a positive impact on the relevant communities.”

Alteryx says executives need to know what AI and ML mean

Although AI can be trained to perform many tasks without human interaction, it’s essential that those designing the system fully understand any possible defects before AI amplifies them, says Alan Jacobson, Chief Data and Analytics Officer at Alteryx.

“While analysts and data scientists often build machine learning (ML) models, those in executive positions and other leadership roles frequently need to understand the results. Explainable AI (XAI) offers significant transparency and trust advantages over black-box models.

“These interpretable models ensure that ML, AI algorithms and the reasoning behind a specific result are more understandable to people who don't have a data science background.”

For example, an ML model trained using a collection of financial data to help approve or deny a loan applicant could use XAI to provide not only the answer but also detail how and why it arrived at its response, explains Jacobson.
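The loan scenario Jacobson describes can be sketched with an interpretable model. Below is a minimal, purely illustrative example: the feature names, weights and applicant values are hypothetical, and a real system would learn its weights from data rather than have them hand-set. The point is only that an interpretable model can report not just a decision but each feature's contribution to it.

```python
import math

# Hypothetical, hand-set weights for a toy interpretable loan model.
# Positive weights push towards approval; negative ones push against.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "years_employed": 0.4}
BIAS = -0.2

def explain_decision(applicant):
    """Return the approval probability plus each feature's contribution."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link
    return probability, contributions

# Illustrative applicant with normalised feature values
applicant = {"income": 1.2, "debt_ratio": 0.9, "years_employed": 0.5}
prob, contribs = explain_decision(applicant)
print(f"approval probability: {prob:.2f}")
# List contributions, largest in magnitude first, so a reviewer can see
# which factors drove the decision and in which direction
for feature, value in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {value:+.2f}")
```

Here a loan officer without a data science background can see that the high debt ratio was the dominant negative factor, which is exactly the kind of "how and why" transparency a black-box model cannot offer.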

“Rather than believing AI will simply deliver the correct autonomous insights, it’s imperative to fully understand how and why it arrived at the answer it did,” says Jacobson. “The importance of explainable AI goes beyond making the wrong decision.”

Accenture examines “what is fair” to tackle AI bias

There is still a lot to be done by people as well as machines. Identifying and prioritising which biases to address requires experts from a wide range of disciplines to develop and implement technical improvements, says Sue Tripathi, Managing Director, Global GTM, Data, AI at Accenture.

“To complicate matters, making ‘fair’ decisions to minimise bias implies there is a uniform and standard way in defining ‘fairness’, which is not the case,” explains Tripathi. “How then do we define ‘fairness’ and also measure it?”

Those involved in codifying definitions of fairness and attempting to provide ‘fairness’ metrics highlight the complexities involved. Various academic AI centres point to cases where the “fair” percentage determined by their algorithms may not actually reflect the real world in terms of pay or health equity practices, for example.

“Should an organisation set different decision thresholds for different groups, based on race, gender, social, and economic factors?” asks Tripathi. “Is there a single universal definition of fairness or metrics that can be applied?”
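Tripathi's questions can be made concrete with one common fairness metric, demographic parity, which compares approval rates across groups. The sketch below uses entirely synthetic scores: it shows that a single shared threshold can produce unequal approval rates, while per-group thresholds can equalise them, which is precisely the design decision her questions leave open.

```python
# Toy data: (group, model_score) pairs; entirely synthetic, for illustration.
records = [
    ("A", 0.72), ("A", 0.55), ("A", 0.61), ("A", 0.35),
    ("B", 0.58), ("B", 0.42), ("B", 0.30), ("B", 0.66),
]

def approval_rate(group, threshold):
    """Share of a group's applicants whose score clears the threshold."""
    scores = [s for g, s in records if g == group]
    return sum(s >= threshold for s in scores) / len(scores)

# One shared threshold: group A is approved more often than group B
shared = 0.5
print("shared threshold:", approval_rate("A", shared), approval_rate("B", shared))

# Lowering group B's threshold equalises the rates (demographic parity),
# but whether that is 'fair' is exactly the open question in the text
print("per-group thresholds:", approval_rate("A", 0.5), approval_rate("B", 0.42))
```

Demographic parity is only one of several competing formalisations (equal opportunity and calibration are others), and it is mathematically impossible to satisfy all of them at once in general, which is why no single universal metric exists.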

Democratise data to do away with AI bias, says Alteryx

Upskilling in-house experts in data literacy and analytics is also a crucial preventative measure to avoid the pitfalls and ethical issues around deploying AI, explains Alteryx’s Jacobson. 

“Cloud-based and on-premise platforms with drag-and-drop – or no-code/low-code – and automated ML capabilities will shorten the learning curve for many users,” he says. “Additionally, platforms that offer guided paths and recommendations for model usage, while providing clear data lineage trails and notes for explaining data points, will provide the level of interpretability and explainability needed to help democratise data science understanding and accessible AI implementation across your domain experts.”

But even the experts working around the clock on these problems say we shouldn’t expect bias to be entirely removed from artificial intelligence any time soon.

“AI bias can be detected and mitigated, but often it cannot be completely eliminated, because of intersectionality issues,” says IBM’s Rossi. “While decreasing bias over some protected variable, one may increase bias over another protected variable. This is why inclusiveness, transparency and explainability are so fundamental in AI models. 

“These properties allow AI users to know what kind and how much bias is still present in the AI system, and to make an informed decision on whether it is appropriate to use the AI system in the deployment environment. It is therefore essential to have a global approach to AI ethics and not just focus on a single issue.”
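Rossi's intersectionality point can be demonstrated numerically. The counts below are invented for illustration: a mitigation step equalises approval rates across gender, yet the same adjustment widens the gap across ethnicity, because the two protected attributes overlap in the population.

```python
# Synthetic approval counts per intersectional subgroup; illustrative only.
# Keys are (gender, ethnicity); values are (approved, total).
before = {("F", "X"): (10, 40), ("F", "Y"): (30, 60),
          ("M", "X"): (30, 60), ("M", "Y"): (40, 40)}
# After a hypothetical mitigation step that equalises gender-level rates
after = {("F", "X"): (12, 40), ("F", "Y"): (48, 60),
         ("M", "X"): (24, 60), ("M", "Y"): (36, 40)}

def rate(counts, attr_index, value):
    """Approval rate over all subgroups matching one protected attribute."""
    approved = sum(a for k, (a, t) in counts.items() if k[attr_index] == value)
    total = sum(t for k, (a, t) in counts.items() if k[attr_index] == value)
    return approved / total

for name, counts in [("before", before), ("after", after)]:
    gender_gap = abs(rate(counts, 0, "F") - rate(counts, 0, "M"))
    ethnicity_gap = abs(rate(counts, 1, "X") - rate(counts, 1, "Y"))
    print(f"{name}: gender gap {gender_gap:.2f}, ethnicity gap {ethnicity_gap:.2f}")
```

In this constructed example the gender gap drops from 0.30 to zero while the ethnicity gap grows from 0.30 to 0.48, illustrating why bias can be reduced on one protected variable yet increased on another, and why users need transparency about which biases remain.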

Human judgement and processes are still needed to ensure AI-supported decision-making or prediction is fair and unbiased, says Accenture’s Tripathi. 

She concludes: “A confluence of humans and machines working together offers many prospects that may well lead to a common language and standardisation in how best AI could operate in multiple contexts, while reducing bias.”
