As businesses worldwide adopt and implement automation, organisations are also evolving their use of artificial intelligence (AI).
According to a recent survey by Gartner, 80% of executives believe automation can be applied to any business decision.
“The survey has shown that enterprises are shifting away from a purely tactical approach to AI and beginning to apply AI more strategically. For example, a third of organisations are applying AI across several business units, creating a stronger competitive differentiator by supporting decisions across business processes,” said Erick Brethenoux, Distinguished VP Analyst at Gartner.
Why is scaling AI still an issue for organisations?
Gartner’s survey identified a gap between the number of AI models organisations develop and the number that actually make it into production. It reported that, on average, only 54% of AI models move from pilot to production, a figure only marginally higher than the 53% Gartner reported in a 2020 survey.
“Scaling AI continues to be a significant challenge. Organisations still struggle to connect the algorithms they are building to a business value proposition, which makes it difficult for IT and business leadership to justify the investment it requires to operationalise models,” said Frances Karamouzis, Distinguished VP Analyst at Gartner.
Of the organisations surveyed, 40% indicated they have thousands of AI models deployed. This creates governance complexity, further challenging data and analytics leaders’ ability to demonstrate a return on investment from each model.
Balancing security with AI adoption
Although security and privacy concerns were not ranked as a top barrier to AI adoption, 41% of organisations reported they have previously had a known AI privacy breach or security incident.
When asked which parties they were most worried about when it comes to AI security, 50% of respondents cited competitors, partners or other third parties, and 49% cited malicious hackers. However, among organisations that had experienced an AI security or privacy incident, 60% reported data compromise by an internal party.
“Organisations’ AI security concerns are often misplaced, given that most AI breaches are caused by insiders. While attack detection and prevention are important, AI security efforts should equally focus on minimising human risk,” said Brethenoux.