Dr Joy Buolamwini: Helping Tech Giants Recognise AI Biases

With a longstanding commitment to AI ethics, Dr Joy Buolamwini continues to hold AI developers accountable as they work to tackle machine learning biases

Keen to continue highlighting the societal implications of AI, Dr Joy Buolamwini uses her research to make the technology more equitable and accountable.

Self-described as a poet of code, she founded the Algorithmic Justice League (AJL) to create a world with more equitable and accountable AI technology. The methodology behind her MIT thesis uncovered significant racial and gender bias in AI services from companies including Microsoft, IBM and Amazon, which quickly worked to improve their software as a result.

Her main concern is tackling global AI bias, which is often an unintended consequence of developers not training their models on sufficiently wide or diverse datasets.

Working to eliminate built-in AI biases

Buolamwini's research has been covered in over 40 countries, and she continually champions the need for algorithmic equity at the World Economic Forum and the United Nations.

Committed to reducing widespread AI harm, Buolamwini also serves on the Global Tech Panel convened by the Vice President of the European Commission to advise world leaders and technology executives. She has also written pieces on the impact of AI for publications like TIME Magazine and The New York Times.

As businesses race to implement new AI technologies, it has become increasingly clear that many companies are harnessing AI ‘for the sake of it’ rather than with clear intentions. Buolamwini’s work seeks to highlight that, as a result, AI could be developed in ways that are not only unsafe, but also exclusionary.

Algorithmic bias, which Buolamwini calls the “coded gaze”, refers to the discriminatory and exclusionary practices embedded within machine learning systems. Buolamwini is now dedicated to confronting these biases throughout AI technology, in addition to working with businesses to take accountability for the way they code.

“Are we factoring in fairness as we’re developing systems?” she says. “We now have the opportunity to unlock even greater equality if we make social change a priority and not an afterthought.”

Her thesis came about during her time as a researcher at the MIT Media Lab, where she worked to identify bias in algorithms and to develop practices for accountability in their design. During her research, Buolamwini showed 1,000 faces to facial recognition systems and asked the systems to identify whether each face was female or male. She discovered that the software struggled to correctly identify dark-skinned women.
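To illustrate the idea behind this kind of audit, the sketch below shows how a classifier's error rates can be broken down by group rather than reported as a single overall figure. It is a simplified, hypothetical example with made-up records, not Buolamwini's own methodology or code.

from collections import defaultdict

# Hypothetical audit records: each entry holds the true label, the
# classifier's prediction, and a skin-type group for the photographed face.
results = [
    {"true": "female", "pred": "female", "group": "darker-skinned"},
    {"true": "female", "pred": "male",   "group": "darker-skinned"},
    {"true": "male",   "pred": "male",   "group": "lighter-skinned"},
    {"true": "female", "pred": "female", "group": "lighter-skinned"},
]

# Count errors per group, since a single aggregate accuracy number
# can hide poor performance on one subgroup.
totals, errors = defaultdict(int), defaultdict(int)
for r in results:
    totals[r["group"]] += 1
    if r["pred"] != r["true"]:
        errors[r["group"]] += 1

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.0%} misclassified ({errors[group]}/{totals[group]})")

The point of disaggregating results in this way is that a system can appear accurate overall while still failing consistently for one demographic group, which is the pattern her research exposed.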

Buolamwini's research into this topic was subsequently cited in 2020 as an influence on both Google and Microsoft as they addressed gender and racial bias in their products and processes.

AJL and beyond: Holding businesses accountable

Throughout her career, Buolamwini has continued to advocate for AI and machine learning companies to use wider datasets when developing their models, so that tools like facial recognition become more accurate and represent more groups of people.

To promote AI that is both equitable and accountable, Buolamwini founded AJL in 2016. The organisation combines art and research to highlight the potential societal implications and harms of AI technology, in addition to raising public awareness of how AI is impacting our world. It also continues to advocate for further research into AI ethics and provides resources and a live blog.

Buolamwini published her first book in 2023. Unmasking AI: My Mission to Protect What Is Human in a World of Machines highlights her research.

When it comes to ethical AI business development, Buolamwini says in an interview with Salesforce: “Even if we have the best intentions, what happens if something goes wrong? There has to be some form of redress.” 

She adds: “Being intentional about keeping humans in certain jobs is important. We need to be really careful with preserving the human element, when there’s so much excitement and hype.”
