Study shows how AI devices could perpetuate biases

The use of AI in healthcare can benefit everyone, but if left unchecked, the technologies could unintentionally perpetuate sex, gender, and race biases

AI is increasingly being used in healthcare to support new developments, diagnosis, and more personalised care. This surge has led two Stanford University faculty members to call for efforts to ensure the technology does not worsen existing healthcare disparities.

In a new paper, the faculty members discuss sex, gender, and race bias in medicine and how these biases could be perpetuated by AI devices. The authors suggest short- and long-term approaches to preventing AI-related bias, such as changing policies at medical funding agencies and publications to ensure that the data collected for studies are diverse.

“As we’re developing AI technologies for health care, we want to make sure these technologies have broad benefits for diverse demographics and populations,” said James Zou, assistant professor of biomedical data science and, by courtesy, of computer science and of electrical engineering at Stanford and co-author of the study.

The researchers suggested that the matter of bias will only become more important as personalised, precision medicine grows in the coming years. Personalised medicine, which is tailored to each patient based on factors such as their demographics and genetics, is vulnerable to injustice if AI medical devices cannot adequately account for individuals’ differences.

“We’re hoping to engage the AI biomedical community in preventing bias and creating equity in the initial design of research, rather than having to fix things after the fact,” said Londa Schiebinger, the John L. Hinds Professor in the History of Science in the School of Humanities and Sciences and senior author of the paper.

Addressing the bias

AI systems are only as good as the quality of their input data. If a training dataset can be cleaned of conscious and unconscious assumptions about race, gender, and other social categories, it becomes possible to build an AI system that makes less biased, data-driven decisions.
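In practice, a first step toward that goal is simply auditing who is represented in the training data before a model is built. The sketch below is a minimal illustration of such an audit, not something from the paper itself; the column names `sex`, `race`, and `label` are hypothetical placeholders for whatever demographic fields a real dataset contains.

```python
# Minimal sketch: auditing a tabular training dataset for demographic balance.
# Column names ("sex", "race", "label") are hypothetical placeholders.
import pandas as pd

def audit_representation(df: pd.DataFrame, group_cols: list[str]) -> pd.DataFrame:
    """Report how each demographic group is represented in the data."""
    counts = df.groupby(group_cols).size().rename("n").reset_index()
    counts["share"] = counts["n"] / len(df)
    return counts.sort_values("share")

# Toy example: groups with a small "share" are under-represented,
# and a model trained on this data may serve them poorly.
df = pd.DataFrame({
    "sex":   ["F", "F", "M", "M", "M", "M"],
    "race":  ["A", "B", "A", "A", "B", "A"],
    "label": [1, 0, 1, 0, 1, 0],
})
print(audit_representation(df, ["sex", "race"]))
```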

The study outlined challenges that can lead to bias and found they are fundamentally linked to how we design and collect the data used to train and evaluate the algorithms. Technology alone will not fix the issues; social problems that support structural inequality will have to be addressed. Researchers and educators can also do their part to develop education and technologies that strive toward social justice.
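Because the study ties bias to how algorithms are both trained and evaluated, one concrete evaluation practice is to report performance separately for each demographic subgroup rather than as a single aggregate number. The following sketch, an illustration under assumed inputs rather than the authors' own method, computes per-group accuracy with NumPy; a large gap between groups flags potential bias.

```python
# Minimal sketch: evaluating predictions separately for each demographic
# subgroup so performance gaps become visible. Inputs are hypothetical.
import numpy as np

def subgroup_accuracy(y_true, y_pred, groups):
    """Return accuracy per group; large gaps between groups flag potential bias."""
    return {
        g: float(np.mean(y_true[groups == g] == y_pred[groups == g]))
        for g in np.unique(groups)
    }

y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0])
groups = np.array(["F", "F", "F", "M", "M", "M"])
print(subgroup_accuracy(y_true, y_pred, groups))  # accuracy for "F" and "M"
```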
