The Possible Danger of AI in Healthcare: Study Urges Caution

An Oxford University study warns that unregulated AI chatbots could pose ethical concerns in the healthcare sector, citing an urgent need for data privacy safeguards

A study published by academics at the University of Oxford has found that some care providers have been using generative AI (Gen AI) chatbots such as ChatGPT to create care plans for patients.

First reported by The Guardian, the study calls for carers and others working in the healthcare sector to be cautious when using unregulated AI tools. According to the research, such tools could lead healthcare professionals to act on misleading, inaccurate or biased information, causing harm as a result.

With the acceleration of digital transformation, the healthcare industry continues to explore a wide range of use cases for AI technology, prompting calls for safeguards.

Protecting personal patient data

The artificial intelligence (AI) healthcare market is predicted to grow from US$11bn in 2021 to more than US$187bn by 2030, according to Statista. Growth on this scale could significantly change how healthcare providers, hospitals, and pharmaceutical and biotechnology companies operate.

The Oxford University findings present a potential risk to patient confidentiality, according to Dr Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, who surveyed care organisations for the study.

“If you put any type of personal data into [a generative AI chatbot], that data is used to train the language model,” she says. “That personal data could be generated and revealed to somebody else.”

In line with these safety concerns, relying solely on AI to draw up patient care plans could result in substandard care, creating ethical dilemmas for an organisation.

Protecting sensitive data is vitally important in a healthcare context, so the study calls for healthcare professionals to exercise caution when using AI. Rather than relying on it to complete a task outright, AI can be used in a 'copilot' format: informing treatment options and supporting the professional in their decision on how best to help a patient.
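In practice, Dr Green's warning about personal data entering model training suggests a simple technical mitigation: strip identifiable details before any text reaches an external chatbot, and keep a human reviewer in the loop. The Python sketch below is purely illustrative; the redaction patterns and the query_chatbot stub are hypothetical assumptions for demonstration, not part of the Oxford study or any specific product, and a real deployment would need far more robust PII detection.

```python
import re

# Hypothetical redaction patterns (assumed for illustration only).
PATTERNS = {
    "NHS_NUMBER": re.compile(r"\b\d{3}[ -]?\d{3}[ -]?\d{4}\b"),
    "DATE_OF_BIRTH": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace recognisable personal identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

def query_chatbot(prompt: str) -> str:
    # Stand-in for whatever Gen AI service is used; a real system
    # would call an approved, regulated service here.
    return f"(draft generated from prompt: {prompt!r})"

def draft_care_plan(notes: str) -> str:
    """Redact notes, then ask the chatbot for a draft only.

    The key point: only redacted text leaves the system, and a human
    professional reviews the draft before anyone acts on it.
    """
    safe_notes = redact(notes)
    return query_chatbot(f"Suggest a draft care plan based on: {safe_notes}")

if __name__ == "__main__":
    notes = ("Patient DOB 04/07/1952, NHS number 943 476 5919, "
             "contact j.smith@example.com.")
    print(draft_care_plan(notes))
```

The design choice mirrors the study's 'copilot' framing: the chatbot never sees raw identifiers and only ever produces a draft for a professional to check.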

In addition to this research, Oxford University is also completing a three-phase project that aims to reduce bias in AI health prediction models which are trained on real-world patient data. 


Because AI is trained and tested on existing data, models built on datasets that are not sufficiently broad often reproduce existing biases. This can lead to discrimination based on personal data, including protected characteristics such as race, gender and sexuality.
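One common way to surface this kind of bias is to compare a model's error rates across demographic groups. The sketch below is a minimal Python illustration of that idea using entirely synthetic predictions; the group labels and the 20-point gap threshold are assumptions for demonstration, not details drawn from the Oxford project.

```python
from collections import defaultdict

# Synthetic predictions: (group, true_label, predicted_label).
# In a real audit, these would come from a held-out test set.
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

def accuracy_by_group(rows):
    """Compute per-group accuracy to expose uneven model performance."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, truth, pred in rows:
        total[group] += 1
        correct[group] += int(truth == pred)
    return {g: correct[g] / total[g] for g in total}

scores = accuracy_by_group(results)
for group, acc in scores.items():
    print(f"{group}: accuracy {acc:.0%}")

# A large gap between groups (here an assumed 20-point threshold)
# flags the model for further fairness investigation.
if max(scores.values()) - min(scores.values()) > 0.20:
    print("Warning: performance gap suggests dataset bias.")
```

On this toy data the model scores 75% for one group and 50% for the other, the sort of gap that a bias-reduction project would investigate before the model went anywhere near patients.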

The importance of ethical decision-making in AI

Whilst there are inevitable concerns over the use of AI in healthcare decision-making, the technology does hold great potential to help improve patient treatment.

According to NHS guidance, AI is currently used for both X-ray and scan analysis to support radiologists in making assessments quickly, allowing them more time to spend directly with patients.

Additionally, it can support patients in virtual wards through remote monitoring technology. The NHS also states that AI-based technologies are used for augmented decision-making in health and care treatment, meaning professionals make the final decision on patient treatment but take AI outputs into consideration.

Stuart Munton, Chief for Group Delivery at AND Digital, says: “With frontline staff under pressure, the case for adopting AI in the healthcare sector is compelling, but this research is another reminder of the risks associated with unchecked technology being allowed to make recommendations for patients. 

“The truth is that on balance AI will bring huge benefits to health professionals in the long term, but this demand needs to be juggled alongside mitigating error, cyber risks, and privacy concerns.”

In recent months, some large technology companies have committed to realising the potential of AI within the healthcare industry. ServiceNow, for example, has recently championed ethical AI systems designed to improve patient outcomes and deliver information to staff, with the aim of improving care and reducing patient readmission rates.

Likewise, IBM and Boehringer Ingelheim partnered at the end of 2023 to harness foundation models for the discovery of novel candidate antibodies, with the aim of developing efficient therapeutics.

It is hoped that this novel solution could help to drastically improve patient outcomes worldwide.
