Swiss Re: Pharma, Not IT, to See Most Adverse Effects of AI

Many would have anticipated that the disruptive technology would have the most significant impact on the IT services industry
Swiss Re's AI report revealed surprising results, showing that pharmaceuticals stands to be the industry most adversely affected by the applications of AI

According to a recent study by insurance giant Swiss Re, the pharmaceuticals sector is expected to face the most adverse effects from AI over the next decade. 

This finding comes as a surprise, as many would have anticipated that the disruptive technology would have the most significant impact on the IT services industry.

While the study reveals that IT services are currently the most affected by AI risks due to their pioneering role in the field, this is set to change as the use of technology becomes more widespread across all industries. 

The report suggests that while IT services rank first and pharmaceuticals third in terms of AI risks from 2024-2025, the sectors will flip positions by 2032, with pharmaceuticals taking the top spot and IT services descending to fourth.

AI's offerings to healthcare


Swiss Re's research highlights AI's immense potential benefits but also concludes that healthcare and pharmaceuticals face the greatest risks from flawed AI systems over the next ten years. 

These industries stand to be among the biggest gainers and adopters of AI, with a study by Tata Consultancy Services showing that more than half of healthcare businesses expect AI technology to help double productivity in the coming years.

AI can be used in healthcare and pharmaceuticals for functions like patient monitoring, diagnosis, drug development, and administration. 

However, the study warns that flawed or biased AI systems can lead to misdiagnosing conditions, resulting in illness or loss of life.

The World Health Organization (WHO) echoes these concerns, stressing that while AI has the ability to "vastly improve diagnostics and treatments," developers, regulators, and healthcare providers must "fully account for the associated risks."

But as Rohit Malpani of the WHO's Research for Health Department put it bluntly: "There is no free lunch. There are different risks associated with each use case."

AI risks in healthcare

Bias amplified through training data is a major pitfall the WHO flags. If an AI system's data disproportionately represents certain demographics, it can perpetuate discrimination against underrepresented groups. 

The threat of employment disruption and the need to retrain clinicians as AI automates some roles are additional challenges.

While generative AI chatbots like ChatGPT offer useful clinical applications, such as easier information retrieval or support in analysing patient problems from file information, risks like propagating misinformation and undermining data protections must be mitigated.

RSM UK Partner Clive Makombera warns that the opportunities must be balanced against the risks as the healthcare sector wades further into AI usage.


“As with any nascent technology, there are often grey areas which require further understanding and clear guidance,” Makombera told Healthcare Digital. “Proceeding with caution and identifying the potential hurdles posed by things like data protection, risk of disinformation, data sharing and investment risk is advisable.”

To mitigate some of these risks, Makombera argues AI must be seen as playing a supporting role, rather than acting as a decision maker.

Insurers recognising the risk

Recognising the potential risk, the insurance industry has a key role in supporting responsible AI adoption, according to the Swiss Re report.

Insurers are starting to craft new products covering AI performance failures – a top cross-industry vulnerability. As a "shock absorber," insurance can help build the digital trust to fully harness AI's potential while providing protection for patients.

This is increasingly important as AI regulation in healthcare currently lags behind the pace of innovation, according to the WHO publication. 

Therefore, the health agency launched a raft of new guidance aimed at safeguarding patient privacy, validating data pipelines, managing risk, and ensuring transparency. 

"Artificial intelligence holds great promise for health but also comes with serious challenges, including unethical data collection, cybersecurity threats, and amplifying biases or misinformation,” Dr. Tedros Adhanom Ghebreyesus, Director-General at WHO said following the announcement. “This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis while minimising the risks."

