Ethical AI a matter of life or death for medical “carebots”

A team of North Carolina State University researchers has developed plans for a set of algorithms that could allow a future “carebot” to make complex decisions about prioritising the treatment of human patients.
The new algorithms are designed to help incorporate ethical guidelines into artificial intelligence decision-making programs, such as the virtual assistants known as carebots that are used in healthcare settings, says the research team.
“Technologies like carebots are supposed to help ensure the safety and comfort of hospital patients, older adults and other people who require health monitoring or physical assistance,” says paper author Veljko Dubljević, an Associate Professor in the Science, Technology & Society program at NC State. “In practical terms, this means these technologies will be placed in situations where they need to make ethical judgments.”
Dubljević presents an example in which a carebot is tasked with providing medical assistance to two people. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first.
“How does the carebot decide which patient is assisted first?” asks Dubljević. “Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?”
While most approaches to such decisions focus on outcomes and consequences, the researchers point to two further factors that humans consider when bringing moral judgement to the process.
The first is the intent behind an action and the character of the individual performing it: who is acting, and what are they trying to accomplish? The second factor is the action itself, say the researchers, as people can view certain actions, such as lying, as inherently bad.
Complexities arise when these factors interact, says the team. For example, lying may be bad in itself, but if a nurse lies to a patient making unnecessary demands in order to prioritise the treatment of a second patient in more urgent need, most people would see the lie as morally acceptable.
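To make the interaction between these factors concrete, the toy calculation below assigns a score to the actor's intent, the action itself and its consequence, then combines them into a single judgement. The scores, weights and the adc_score function are illustrative assumptions for this article only; the researchers' actual formula is not reproduced here.

```python
from dataclasses import dataclass

@dataclass
class MoralJudgement:
    """Each factor is scored on an illustrative -1 (bad) to +1 (good) scale."""
    agent: float        # intent/character of the actor, e.g. a nurse acting in good faith
    deed: float         # the action itself, e.g. lying scores negatively
    consequence: float  # the outcome, e.g. the more urgent patient is treated first

def adc_score(j: MoralJudgement,
              w_agent: float = 1.0,
              w_deed: float = 1.0,
              w_consequence: float = 1.0) -> float:
    """Weighted average of the three factors; the weights are placeholders."""
    total = w_agent + w_deed + w_consequence
    return (w_agent * j.agent + w_deed * j.deed + w_consequence * j.consequence) / total

# The nurse example: a lie (negative deed) told in good faith (positive agent)
# so that the more urgent patient is treated first (positive consequence).
nurse_lie = MoralJudgement(agent=0.8, deed=-0.5, consequence=0.9)
print(f"Morally acceptable: {adc_score(nurse_lie) > 0}")  # True under these example values
```

Under these example values, the negative score for the lie is outweighed by the positive intent and outcome, mirroring the judgement most people would make in the nurse scenario.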
New model opens up opportunities for human-AI teaming technologies
The researchers developed a mathematical formula and a related series of decision trees that can be incorporated into AI programs. These build on the Agent, Deed, and Consequence (ADC) Model, developed by Dubljević and colleagues to reflect how people make complex, real-world ethical decisions.
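The paper's formula and decision trees are not reproduced in this article, so the sketch below is only an assumed illustration of how one such tree might be expressed in code, applied to the two-patient scenario described earlier. The Patient fields, the 0-10 urgency scale and the order of the branches are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    conscious: bool
    urgency: int          # illustrative scale: 0 (stable) to 10 (most urgent)
    demands_priority: bool

def choose_patient(a: Patient, b: Patient) -> Patient:
    """Toy decision tree for the two-patient scenario: medical urgency outranks
    a competing demand for attention; a real system would branch further on consent."""
    # Branch 1: a clear difference in urgency decides the case.
    if a.urgency != b.urgency:
        return a if a.urgency > b.urgency else b
    # Branch 2: equal urgency, so defer to the patient who is able to consent.
    if a.conscious != b.conscious:
        return a if a.conscious else b
    # Branch 3: otherwise an explicit request breaks the tie.
    return a if a.demands_priority else b

unconscious_urgent = Patient(conscious=False, urgency=9, demands_priority=False)
demanding_stable = Patient(conscious=True, urgency=3, demands_priority=True)
assert choose_patient(unconscious_urgent, demanding_stable) is unconscious_urgent
```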
Previous efforts to incorporate ethical decision-making into AI programs have been limited in scope and focused on utilitarian reasoning, which neglects the complexity of human moral decision-making, says Dubljević. “Our work addresses this and, while I used carebots as an example, is applicable to a wide range of human-AI teaming technologies.
“Our goal here was to translate the ADC Model into a format that makes it viable to incorporate into AI programming. We’re not just saying that this ethical framework would work well for AI, we’re presenting it in language that is accessible in a computer science context. With the rise of AI and robotics technologies, society needs such collaborative efforts between ethicists and engineers,” says Dubljević. “Our future depends on it.”
The paper, Ethics in Human–AI teaming: Principles and Perspectives, was published in the journal AI and Ethics and was co-authored by Michael Pflanzer and Zachary Traylor, PhD students at NC State; Chang Nam, a Professor in NC State’s Edward P. Fitts Department of Industrial and Systems Engineering; and Joseph Lyons of the Air Force Research Laboratory. The work was carried out with support from the National Science Foundation and the National Institute for Occupational Safety and Health.