The Dangers of AI Bias: Understanding the Business Risks

As Google works to fix AI bias issues within Gemini, concerns over AI and machine learning biases remain as developers consider how to combat inaccuracies

Bias undermines the integrity of artificial intelligence (AI) and machine learning (ML) models.

Google recently came under scrutiny over its rebranded conversational model, Gemini (formerly Bard), when the chatbot started generating inaccurate depictions of historical figures. Acknowledging the error, the tech giant has paused the model's ability to generate images of people until it can confirm a solution.

The company's emphasis on its commitment to information quality and safeguarding has led others to consider the safety and integrity of new AI models and how they impact users.

We consider some real-life examples of AI bias and how the relevant companies have rallied to make fixes.

Holding AI developers accountable for inbuilt biases

IBM describes AI bias as occurring when systems produce skewed results that reflect and perpetuate human biases within a society, including historical and current social inequalities.

Within a business context, AI bias can erode enterprise trust in AI systems and spark debates over AI ethics. In addition, biased AI can result in discrimination against certain individuals or groups, or unfair favouritism towards others.

When considering how bias is built into AI algorithms, Dr Joy Buolamwini describes this as a “coded gaze” - referring to discriminatory and exclusionary practices embedded within machine learning.

“Are we factoring in fairness as we’re developing systems?” she said in a speech. “We now have the opportunity to unlock even greater equality if we make social change a priority and not an afterthought.”

Google needing to update the Gemini model is an example of AI only being as reliable as the people who build it. If developers do not account for historical events or existing biases when training a model, it will not factor them into its responses. Similarly, if AI is created with bias built in, it can be very damaging for users.

The tech company responded quickly to complaints, stating that it was working to address the issue. A Google spokesperson told The Hindu Business Line: “Gemini is built as a creativity and productivity tool and may not always be reliable, especially when it comes to responding to some prompts about current events, political topics, or evolving news. This is something that we’re constantly working on improving.”

Moving towards greater AI equity

In light of these events, it is clear that companies are already working to ensure that bias does not appear in their results. For instance, Adobe told TechRadar in an interview that it has programmed its Firefly Gen AI tool to consider a user's race, where they live and how diverse that region is - all to ensure that its image results reflect the reality of its users.

However, there is still plenty more to be done to ensure that AI does not perpetuate biases. 

In recent months, research undertaken by Stanford University has highlighted concerns within academic contexts. It found that GPT detectors often misclassify non-native English writing as AI-generated, raising concerns over discrimination.

Similar concerns have been raised over businesses using AI to assist in the hiring process, which could cause groups of potential candidates to be overlooked. A study conducted by IBM in late 2023 found that 42% of companies were using AI screening to “improve recruiting” - a practice the BBC suggests could be filtering out the best candidates for the job.

Whilst there are plenty of benefits to using AI in a safe and responsible way, it is important to evaluate and manage potential risks to ensure these systems are used ethically.


AI Magazine is a BizClik brand 
