While 84% of global executives believe responsible AI (RAI) should be on top management agendas, only 25% have comprehensive RAI programs in place, according to a study published by MIT Sloan Management Review (MIT SMR) and Boston Consulting Group (BCG).
The report, To Be a Responsible AI Leader, Focus on Being Responsible, was conducted to assess the degree to which organisations are addressing RAI. It was based on a global survey of 1,093 respondents from 22 industries and 96 countries, as well as insights gathered from an international panel of more than 25 AI experts.
Less than a quarter have fully implemented responsible AI programs
Nearly a quarter of survey respondents report that their organisation has experienced an AI failure, ranging from mere lapses in technical performance to outcomes that put individuals and communities at risk. RAI initiatives seek to mitigate the technology's risks by proactively addressing its impact on people. Despite the clear necessity for RAI, less than one-quarter of organisations have a fully implemented program.
"Our research reveals a gap between aspirations and reality when it comes to responsible AI, but that gap also presents an opportunity for organisations to become leaders on this issue," said Elizabeth M. Renieris, a senior research associate at Oxford's Institute for Ethics in AI, an MIT SMR guest editor, and a coauthor of the report.
"By taking a more expansive view of their stakeholders and viewing RAI as an expression of their deeper corporate culture and values, organisations stand better equipped to ensure that their AI systems promote individual and societal welfare."
How industry stakeholders in Africa and China adopt RAI
BCG and MIT SMR conducted dedicated surveys in Africa and China to understand how industry stakeholders in these key geographies approach RAI. Most respondents in Africa (74%) agree that RAI is on their top management agendas, and 69% agree that their organisations are prepared to address emerging AI-related requirements and regulations. In Africa, 55% of respondents report that their organisations' RAI efforts have been underway for a year or less (with 45% at 6 to 12 months, and 10% at less than six months). In China, 63% of respondents agree that RAI is a top management agenda item, and the same percentage agree that their organisations are prepared to address emerging AI requirements and regulations. China appears to have longer-standing efforts around RAI, with respondents reporting that their organisations have focused on RAI for one to three years (39%) or more than five years (20%).
Responsible AI initiatives often lag behind strategic AI priorities
The corporate adoption of AI has been rapid and wide-ranging across organisations in all industries and sectors. MIT SMR and BCG's 2019 report on AI and business strategy found that 90% of companies surveyed had made investments in the technology. But the adoption of RAI has been limited, with just over half of the 2022 survey respondents (52%) reporting that their organisations have an RAI program in place. Of those with an RAI program, a majority (79%) report that the program's implementation is limited in scale and scope. More than half of respondents cited a lack of RAI expertise and talent (54%) and a lack of training or knowledge among staff members (53%) as key challenges that limit their organisation's ability to implement RAI initiatives.
"As organisations rush to adopt AI, the technology can bring with it unintended risks to individuals and communities, highlighting the critical importance of operationalising responsible practices," said Steven Mills, global GAMMA chief AI ethics officer at BCG and a coauthor of the report. "True leaders in RAI are, at their core, responsible businesses. For these frontrunners, RAI is less about focusing on a particular technology and more a natural extension of their purpose-driven culture and focus on corporate responsibility."