Study shows how biases could be perpetuated by AI medical devices

The use of AI in healthcare can benefit everyone, but if left unchecked, the technologies could unintentionally perpetuate sex, gender, and race biases

AI is increasingly being used in healthcare to support development, diagnosis, and more personalised care. This surge has led to two Stanford University faculty members calling for efforts to ensure the technology does not worsen existing health care disparities.

In a new paper, the faculty discuss sex, gender, and race bias in medicine and how these biases could be perpetuated by AI devices. The authors suggest short- and long-term approaches to prevent AI-related bias, such as changing policies at medical funding agencies and publications to ensure the data collected for studies are diverse.

“As we’re developing AI technologies for health care, we want to make sure these technologies have broad benefits for diverse demographics and populations,” said James Zou, assistant professor of biomedical data science and, by courtesy, of computer science and of electrical engineering at Stanford and co-author of the study.

The researchers suggested that the matter of bias will only become more important as personalised, precision medicine grows in the coming years. Personalised medicine, which is tailored to each patient based on factors such as their demographics and genetics, is vulnerable to injustice if AI medical devices cannot adequately account for individuals’ differences.

“We’re hoping to engage the AI biomedical community in preventing bias and creating equity in the initial design of research, rather than having to fix things after the fact,” said Londa Schiebinger, the John L. Hinds Professor in the History of Science in the School of Humanities and Sciences and senior author of the paper.

Addressing the bias

AI systems are only as good as the quality of their input data. If a training dataset can be cleaned of conscious and unconscious assumptions about race, gender, or other ideological concepts, it becomes possible to build an AI system that makes unbiased, data-driven decisions.

The study outlined challenges that can lead to bias and found they are fundamentally linked to how we design and collect the data used to train and evaluate the algorithms. Technology alone will not fix the issues; social problems that support structural inequality will have to be addressed. Researchers and educators can also do their part to develop education and technologies that strive toward social justice.
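The data-design challenges described above can be made concrete with a simple pre-training check. The sketch below is illustrative only, assuming a hypothetical dataset of patient records with a self-reported demographic field: it flags groups whose share of the training data falls well below their share of a reference population, so imbalances can be caught at the design stage rather than fixed after the fact.

```python
from collections import Counter

def audit_representation(records, field, reference, tol=0.5):
    """Flag groups whose share of `records` falls below `tol` times
    their share in a `reference` population (shares as fractions)."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    flagged = {}
    for group, ref_share in reference.items():
        share = counts.get(group, 0) / total
        if share < tol * ref_share:
            flagged[group] = {"data_share": share, "reference_share": ref_share}
    return flagged

# Hypothetical training set skewed 80/20 toward one group,
# audited against a roughly 50/50 reference population.
records = [{"sex": "male"}] * 80 + [{"sex": "female"}] * 20
flagged = audit_representation(records, "sex", {"male": 0.5, "female": 0.5})
# "female" is flagged: 20% of the data vs. ~50% of the population
```

An audit like this is only a first step; as the researchers note, representative data collection must be built into study design, not bolted on afterwards.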
