Could ‘Robotics Anxiety’ Be Hindering AI Adoption?

By Laura Berrill
AI Magazine talks to Oded Karev, General Manager of NICE Advanced Process Automation, NICE

Where are we in terms of AI and automation right now? 

We’re at the stage where interacting with AI-powered bots is becoming a part of our daily lives: they remove the junk from our inboxes, answer our customer queries and even direct us to the relevant human when what we’re asking is out of their remit. And, with uncertainty continuing to loom, the adoption of hyperautomation software doesn’t look set to slow any time soon. According to Gartner, the market for hyperautomation-enabling software will reach $596.6 billion in 2022, up from $481.6 billion in 2020 and a projected $532.4 billion this year.

How is this trend being driven?

Driving this trend, in part, is Robotic Process Automation (RPA). Organisations are rolling out RPA-enabled software at lightning speed for all types of use cases – including supporting customer service agents with complex processes and repetitive work – across all sectors. Yet, as the installed base gets bigger, we are beginning to see the rate of RPA adoption slow, at least among employees. This is down to the emergence of ‘Robotics Anxiety’.

Why are we worried and is it justified?

Faced with more and more AI-powered technologies, and little guidance on how to manage them, employees at every level and in every department are asking questions like ‘are robots here to take our jobs?’ The same can be said for customers, who worry that granting bots access to their data will lead us to living out the plots of science fiction movies.

Of course, some of this anxiety is justified, and some is not. But what prevails is our innate fear of the unknown – and who can blame us? Despite the near-ubiquity of AI, no regulatory framework presently exists for AI or robotics, nationally or globally. To date, very little is holding those designing, developing, or using robotics to account when it comes to ethical standards. This lack of regulation is having a knock-on effect: a report published by Microsoft earlier this year revealed that half of British companies aren’t currently using AI at all. Business and technology leaders cannot ignore this.

How do businesses adopt the technology in the most effective way?

Digital transformation is imperative, and the proliferation of AI technologies like RPA is critical to its success. Yet with a lack of global or national guiding principles, at least for now, leaders must take matters into their own hands and begin to restore confidence by educating employees on how to ethically develop and use AI-powered robotics. Looking to external ethical frameworks to guide thinking, and partnering with software vendors that promote responsibility in design, creation, and development, is a great place to start.

For example, Asimov’s Three Laws of Robotics inspired the NICE Robo Ethical Framework that now underlies every interaction with process robots. From planning to implementation, we live and breathe this doctrine to drive ethically sound human-robot partnerships in the workplace. The goal is to ensure that robots are not only boosting organisational performance but also improving employee and customer experiences.

Critically, employees must be empowered to raise concerns about the development or use of AI-powered robotics. Only then will you have a clear view of how much robotics anxiety exists across your organisation, and be able to map out a tailored response that addresses everyone’s needs.

The importance of ethics

But this cannot be tackled by businesses alone. Change needs to happen quickly, led by national governments, supranational organisations, and the respective leaders in robotics and AI. We may not be consciously designing robots to cause harm, but the plots of Blade Runner and The Terminator should remind us that ensuring our organisations adhere to ethical standards is no bad thing.

 
