Ethical AI a matter of life or death for medical “carebots”

The rise of robots and AI calls for collaborative efforts between ethicists and engineers, says the team hoping to make automated assistants more human

A team of North Carolina State University researchers has developed plans for a set of algorithms that could allow a future “carebot” to make complex decisions about prioritising the treatment of human patients.

The new algorithms are designed to incorporate ethical guidelines into AI decision-making programs, such as the virtual assistants known as carebots that are used in healthcare settings, says the research team.

“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, older adults and other people who require health monitoring or physical assistance,” says paper author Veljko Dubljević, an Associate Professor in the Science, Technology & Society program at NCSU. “In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments.”

Dubljević presents an example in which a carebot is tasked with providing medical assistance to two people. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first.

“How does the carebot decide which patient is assisted first?” asks Dubljević. “Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?”

While most AI decision-making focuses on outcomes and consequences, the researchers point to two further factors that humans weigh when bringing moral judgement to the process.

The first is the intent behind an action and the character of the individual performing it: who is acting, and what are they trying to accomplish? The second factor is the action itself, say the researchers, as people can view certain actions, such as lying, as inherently bad.

Complexities arise when these factors interact, say the team. For example, lying may be bad in itself, but if a nurse lies to a patient making unnecessary demands in order to prioritise the treatment of a patient in more urgent need, most people would judge the lie morally acceptable.

New model opens up opportunities for human-AI teaming technology

The researchers developed a mathematical formula and a series of related decision trees that can be incorporated into AI programs. These rely on the Agent, Deed, and Consequence (ADC) Model, which Dubljević and colleagues developed to reflect how people make complex, real-world ethical decisions.
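To give a flavour of the idea, the decision logic might be sketched as follows. This is a minimal, hypothetical illustration only: the class names, factor encoding and verdict thresholds are assumptions made for readability, not the formula or decision trees from the paper itself.

```python
# Hypothetical sketch of an ADC-style judgement: combine assessments of
# the Agent's intent, the Deed itself, and the Consequence into a verdict.
from dataclasses import dataclass


@dataclass
class Situation:
    agent_intent_good: bool   # A: is the agent's intent morally positive?
    deed_acceptable: bool     # D: is the action itself morally acceptable?
    consequence_good: bool    # C: does the action lead to a good outcome?


def adc_judgement(s: Situation) -> str:
    """Collapse the three ADC factors into a coarse moral verdict.

    Mirrors the intuition in the article: a normally bad deed (such as
    lying) can still come out as acceptable overall when both the
    agent's intent and the consequences are good.
    """
    positives = sum([s.agent_intent_good, s.deed_acceptable, s.consequence_good])
    if positives == 3:
        return "acceptable"
    if positives == 2:
        return "conditionally acceptable"
    return "unacceptable"


# The nurse example: lying (bad deed) to prioritise a patient in urgent
# need (good intent, good consequence).
nurse_lie = Situation(agent_intent_good=True,
                      deed_acceptable=False,
                      consequence_good=True)
print(adc_judgement(nurse_lie))  # conditionally acceptable
```

A real implementation would of course use graded rather than binary factors and the weightings derived in the paper; the point here is only how the three dimensions interact rather than any single one deciding the outcome.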

Previous efforts to incorporate ethical decision-making into AI programs have been limited in scope and focused on utilitarian reasoning, which neglects the complexity of human moral decision-making, says Dubljević. “Our work addresses this and, while I used carebots as an example, is applicable to a wide range of human-AI teaming technologies.

“Our goal here was to translate the ADC Model into a format that makes it viable to incorporate into AI programming. We’re not just saying that this ethical framework would work well for AI, we’re presenting it in language that is accessible in a computer science context. With the rise of AI and robotics technologies, society needs such collaborative efforts between ethicists and engineers,” says Dubljević. “Our future depends on it.”

The paper, Ethics in Human–AI teaming: Principles and Perspectives, was published in the journal AI and Ethics and was co-authored by Michael Pflanzer and Zachary Traylor, PhD students at NC State; Chang Nam, a Professor in NC State’s Edward P. Fitts Department of Industrial and Systems Engineering; and Joseph Lyons of the Air Force Research Laboratory. The work was carried out with support from the National Science Foundation and the National Institute for Occupational Safety and Health.
