FICO and Corinium release State of Responsible AI report

The State of Responsible AI report reveals current attitudes towards ethical, responsible, and trustworthy AI, and the practices organisations already have in place

FICO, a global analytics software firm, has released The State of Responsible AI, a report produced by market intelligence firm Corinium, which found that despite the increased demand for and use of AI tools, almost two-thirds (65%) of respondents' companies can't explain how specific AI model decisions or predictions are made.

This global survey of 100 C-level data and analytics leaders revealed the issues AI-focused executives are considering and tackling as they prepare their organisations to adopt AI in an ethical way.

The study found a concerning lack of awareness of how AI is being used, and whether it is being used responsibly: 39% of board members and 33% of executive teams have an incomplete understanding of AI ethics.

How can businesses combat AI bias? 

The survey found that only a fifth of respondents (20%) currently monitor their models in production for fairness and ethics, while fewer than a quarter (22%) say their organisation has an AI ethics board to consider questions on AI ethics and fairness. One in three (33%) have a model validation team to assess newly developed models, and only 38% say they have data bias mitigation steps built into their model development processes.

However, evaluating the fairness of model outcomes is the most popular safeguard in the business community today, with 59% of respondents saying they do this to detect model bias. Additionally, 55% say they isolate and assess latent model features for bias, and half (50%) say they have a codified mathematical definition for data bias and actively check for bias in unstructured data sources.
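As an illustration of what evaluating model outcomes for fairness can look like in practice, a common starting point is to compare favourable-outcome rates across demographic groups and flag large gaps for review. The sketch below is not drawn from the report; the group labels, sample data, and threshold are purely hypothetical.

```python
from collections import defaultdict

def demographic_parity_gap(decisions):
    """Return the largest gap in favourable-outcome rates across groups.

    `decisions` is an iterable of (group, approved) pairs, where `approved`
    is True when the model produced a favourable outcome for that person.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions from a model in production: (group, approved).
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]

gap, rates = demographic_parity_gap(sample)
print(f"Approval rates by group: {rates}; gap: {gap:.2f}")

# Threshold chosen purely for illustration.
if gap > 0.05:
    print("Potential outcome bias detected - escalate for human review.")
```

In practice, a check of this kind would sit alongside the production monitoring and model validation processes the survey asks about, so that a flagged gap triggers human review rather than going unnoticed.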

The vast majority of businesses (90%) agree that inefficient processes for model monitoring are a barrier to AI adoption, while 63% of respondents believe that AI ethics and responsible AI will become a core element of their organisation's strategy within two years.

"Over the past 15 months, more and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level," said Scott Zoldi, Chief Analytics Officer at FICO. "Organisations are increasingly leveraging AI to automate key processes that - in some cases - are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must understand and enforce auditable, immutable AI model governance and product model monitoring to ensure that the decisions are accountable, fair, transparent, and responsible."

The report highlights practices that will help organisations plan a route towards responsible AI, including: 

  • Establishing practices that protect the business against reputational threats from irresponsible AI use
  • Balancing the need to be responsible with the need to bring new innovations to market quickly
  • Securing executive support for prioritising AI ethics and responsible AI practices
  • Futureproofing company policies in anticipation of stricter regulations around AI
  • Securing the necessary resources to ensure AI systems are developed and managed responsibly