Study shows how biases could be maintained by AI devices
AI is increasingly being used in health care to aid drug development, diagnosis, and more personalised care. This surge has led two Stanford University faculty members to call for efforts to ensure the technology does not worsen existing health care disparities.
In a new paper, the faculty members discuss sex, gender, and race bias in medicine and how AI devices could perpetuate these biases. The authors suggest short- and long-term approaches to prevent AI-related bias, such as changing policies at medical funding agencies and journals to ensure that the data collected for studies are drawn from diverse populations.
“As we’re developing AI technologies for health care, we want to make sure these technologies have broad benefits for diverse demographics and populations,” said James Zou, assistant professor of biomedical data science and, by courtesy, of computer science and of electrical engineering at Stanford and co-author of the study.
The researchers suggested that the matter of bias will only become more important as personalised, precision medicine grows in the coming years. Personalised medicine, which is tailored to each patient based on factors such as their demographics and genetics, is vulnerable to injustice if AI medical devices cannot adequately account for individuals’ differences.
“We’re hoping to engage the AI biomedical community in preventing bias and creating equity in the initial design of research, rather than having to fix things after the fact,” said Londa Schiebinger, the John L. Hinds Professor in the History of Science in the School of Humanities and Sciences and senior author of the paper.
Addressing the bias
AI systems are only as good as the quality of their input data. If a training dataset can be cleared of conscious and unconscious assumptions about race, gender, or other characteristics, it becomes possible to build an AI system that makes unbiased, data-driven decisions.
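One practical starting point is simply measuring how well each demographic group is represented before training begins. The paper does not prescribe code, so the sketch below is illustrative only: the function name, the toy records, and the `threshold` cutoff are all assumptions, not anything from the study.

```python
from collections import Counter

def audit_representation(records, attribute, threshold=0.1):
    """Report each group's share of the dataset for a given attribute,
    and flag groups that fall below a minimum share.
    The threshold is a hypothetical cutoff chosen for illustration."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    flagged = [g for g, s in shares.items() if s < threshold]
    return shares, flagged

# Toy records standing in for a clinical training set (illustrative only)
records = [
    {"sex": "female"}, {"sex": "female"},
    {"sex": "male"}, {"sex": "male"},
    {"sex": "male"}, {"sex": "male"},
    {"sex": "male"}, {"sex": "male"},
    {"sex": "male"}, {"sex": "male"},
]

shares, flagged = audit_representation(records, "sex", threshold=0.25)
print(shares)   # {'female': 0.2, 'male': 0.8}
print(flagged)  # ['female']
```

An audit like this only catches under-representation; it does not detect subtler problems such as biased labels or proxy variables, which is why the authors argue that data-collection policy, not just tooling, has to change.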
The study outlined challenges that can lead to bias and found that they are fundamentally linked to how the data used to train and evaluate the algorithms are designed and collected. Technology alone will not fix the issues; the social problems that sustain structural inequality will also have to be addressed. Researchers and educators can do their part by developing education and technologies that strive toward social justice.