The Dangers of AI Bias: Understanding the Business Risks

As Google seeks to fix AI bias issues within Gemini, concerns over AI and machine learning biases remain as developers consider how to combat inaccuracies

Bias undermines the integrity of artificial intelligence (AI) and machine learning (ML) models.

Google recently came under scrutiny over its rebranded conversational model, Gemini (formerly Bard), after the chatbot started generating inaccurate depictions of historical figures. Acknowledging the error, the tech giant has paused Gemini’s ability to generate images of people until it can confirm a fix.

The company’s emphasis on its commitment to information quality and safeguarding has led others to consider the safety and integrity of new AI models and how they impact users.

Below, we consider some real-life examples of AI bias and how the companies involved have rallied to fix them.

Holding AI developers accountable for inbuilt biases

IBM describes AI bias as the tendency of systems to produce biased results that reflect and perpetuate human biases within a society, including historical and current social inequalities.

Within a business context, AI bias can erode enterprise trust in AI systems and spark debates over AI ethics. In addition, biased AI can result in discrimination against certain individuals or groups, or in unfair favouritism towards others.

When considering how bias is built into AI algorithms, Dr Joy Buolamwini describes this as the “coded gaze” - referring to the discriminatory and exclusionary practices embedded within machine learning.

“Are we factoring in fairness as we’re developing systems?” she said in a speech. “We now have the opportunity to unlock even greater equality if we make social change a priority and not an afterthought.”

Google needing to make updates to the Gemini model is an example of AI only being as smart as its inventor. If a developer does not inform the model of historical events or existing biases, then it will not factor them into its responses. Similarly, if AI is created with bias built in, it can be very damaging for users.

The tech company responded quickly to complaints, stating that it was working to address the issue. A Google spokesperson told The Hindu Business Line: “Gemini is built as a creativity and productivity tool and may not always be reliable, especially when it comes to responding to some prompts about current events, political topics, or evolving news. This is something that we’re constantly working on improving.”

Moving towards greater AI equity

Considering these events, it is clear that companies are working to ensure that bias does not appear in their results. For instance, Adobe told TechRadar in an interview that it has programmed its Firefly generative AI tool to consider a user’s race, where they live and how diverse that region is - all to ensure that its image results reflect their reality.

However, there is still plenty more to be done to ensure that AI does not perpetuate biases. 

In recent months, research undertaken at Stanford University has highlighted concerns within academic contexts. For instance, it found that GPT detectors often misclassify non-native English writing as AI-generated, raising concerns over discrimination.

Similar concerns have also been raised over businesses using AI to assist in the hiring process, which could cause groups of potential candidates to be overlooked. Another study conducted by IBM in late 2023 suggested that 42% of companies were using AI screening to “improve recruiting” - which the BBC states could be filtering out the best candidates for the job.
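To illustrate how such a risk might be evaluated in practice, the sketch below applies a simple disparate-impact check - the “four-fifths” rule of thumb used in US hiring guidance - to automated screening outcomes. The group labels, pass counts and thresholding here are purely hypothetical assumptions for the example, not figures from the IBM or BBC reporting above.

```python
# Illustrative only: a minimal disparate-impact ("four-fifths rule") check on
# hypothetical AI screening outcomes. Group names and pass/fail counts are
# invented for the example.

from collections import Counter

# Hypothetical screening decisions: (applicant group, passed AI screen?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in decisions)
passes = Counter(group for group, passed in decisions if passed)

# Selection rate per group: share of applicants who pass the automated screen.
rates = {group: passes[group] / totals[group] for group in totals}

# Four-fifths rule of thumb: flag the screen if any group's selection rate
# falls below 80% of the highest group's rate.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} ({flag})")
```

In a real deployment, a check like this would only be one part of a broader audit covering training data, model behaviour and downstream hiring decisions.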

Whilst there are plenty of benefits to using AI in a safe and responsible way, it is important to evaluate and manage potential risks to ensure these systems are used ethically.
