Study shows the rewards of utilising responsible AI
Organisations that use artificial intelligence (AI) responsibly will benefit over time, suggests research by Accenture.
In a recent report, The Art of AI Maturity, Accenture identified a small group (12%) of high-performing organisations that are using AI.
These “AI Achievers”, the report says, are already generating 50% more revenue growth than their peers, and outperforming other groups on customer experience (CX) and Environmental, Social and Governance (ESG) metrics.
And with AI expected to add more than $15 trillion to the global economy by 2030, responsible AI was the leading priority for AI applications among industry leaders in 2021, with an emphasis on improving privacy, explainability, bias detection, and governance.
Responsible AI a leading priority for organisations
According to the Responsible AI Institute, AI systems that are not designed in a thoughtful and responsible manner can present a significant risk of financial and reputational harm, particularly for companies without clear strategies and roadmaps.
As companies deploy AI for a growing range of tasks, adhering to laws, regulations and ethical standards will be critical to building a sound AI foundation. Some 80% of companies plan to increase investment in Responsible AI, and 77% see regulation of AI as a priority.
Meanwhile, most companies (69%) have started implementing Responsible AI practices, but only 6% have operationalised their capabilities to be responsible by design.
Being responsible by design will become more beneficial for organisations over time, says Accenture, especially as governments and regulators consider new standards for the development and use of AI. Countries such as the United Kingdom, Brazil, and China are already taking action, either by evolving existing requirements that bear on AI (for example, data protection regulation such as GDPR) or by developing new regulatory policy.
The role of regulation
The research, which surveyed 850 C-suite executives across 17 geographies and 20 industries, shows that awareness of AI regulation is generally widespread and that organisations are well-informed.
Of those surveyed, nearly all (97%) respondents believe that regulation will impact them to some extent, while 95% believe that at least part of their business will be affected by the proposed EU regulations specifically.
However, most organisations have yet to turn these favourable attitudes and intentions into action. Just 6% of organisations have built their Responsible AI foundation and put their principles into practice.
A majority of respondents (69%) have some dimensions in place but haven’t operationalised a robust Responsible AI foundation, while 25% of respondents have yet to establish any meaningful Responsible AI capabilities.
The biggest barrier lies in the complexity of scaling AI responsibly — an undertaking that involves multiple stakeholders and cuts across the entire enterprise and ecosystem.
Data and analytics leaders ‘must understand responsible AI’
With its publication of Introducing Responsible Cisco AI, Cisco explained this month how the company is specifically meeting the tenets of AI governance covered in Gartner Research’s report, Innovation Insight for Bias Detection/Mitigation, Explainable AI and Interpretable AI. The Gartner report advised that “data and analytics leaders must understand responsible AI” in order to “facilitate understanding, trust, and performance accountability required by stakeholders.”
According to Accenture, being responsible by design can help organisations clear those hurdles and scale AI with confidence. By shifting from a reactive AI compliance strategy to the proactive development of mature Responsible AI capabilities, organisations will have the foundations in place to adapt as new regulations and guidance emerge, leaving them free to focus on performance and competitive advantage.