Are organisations evolving their use of AI for automation?

A new Gartner survey has found that organisations are adopting AI to help with automation, but multiple challenges remain

As automation is adopted by businesses globally, organisations are also evolving their use of artificial intelligence (AI).

According to a recent survey by Gartner, 80% of executives think automation can be applied to any business decision.

“The survey has shown that enterprises are shifting away from a purely tactical approach to AI and beginning to apply AI more strategically. For example, a third of organisations are applying AI across several business units, creating a stronger competitive differentiator by supporting decisions across business processes,” said Erick Brethenoux, Distinguished VP analyst at Gartner.

Why is scaling AI still an issue for organisations?

Gartner’s survey identified a gap between the number of AI models organisations develop and the number that actually make it into production. It reported that, on average, only 54% of AI models move from pilot to production. That figure is only marginally higher than the often-cited 53% that Gartner reported in a 2020 survey.

“Scaling AI continues to be a significant challenge. Organisations still struggle to connect the algorithms they are building to a business value proposition, which makes it difficult for IT and business leadership to justify the investment it requires to operationalise models,” said Frances Karamouzis, Distinguished VP analyst at Gartner. 

Of the organisations surveyed, 40% indicated that they have thousands of AI models deployed. This creates governance complexity for organisations, further challenging data and analytics leaders’ ability to demonstrate return on investment from each model.

Balancing security with AI adoption 

Although security and privacy concerns were not ranked as a top barrier to AI adoption, 41% of organisations reported they have previously had a known AI privacy breach or security incident.

When asked which parties they were most worried about when it comes to AI security, 50% of respondents cited competitors, partners or other third parties, and 49% cited malicious hackers. However, among organisations that have faced an AI security or privacy incident, 60% reported data compromise by an internal party.

“Organisations’ AI security concerns are often misplaced, given that most AI breaches are caused by insiders. While attack detection and prevention are important, AI security efforts should equally focus on minimising human risk,” said Brethenoux.
