Employees fear robotics disruption following cyberattacks

With cyberattacks becoming more creative, Kaspersky has found that half of all employees believe production will be halted for weeks after a cyberattack

Most employees are not ready to give robots full autonomy, a Kaspersky study has revealed, and the majority of respondents believe it could take days or longer for businesses to recover in the event of a cyberattack.

The research into the consequences of automation and the increased use of robotics and other AI-based machines found that most respondents don’t believe production processes run by robots can be recovered immediately after a cyberattack or malfunction, with 78% expecting full recovery to take a few days or longer.

Cybercriminals can harm businesses via emails, virtual spaces, endpoints, and embedded systems, and they are constantly searching for new ways to attack — a fact companies should bear in mind when implementing new technologies such as robotics into their business processes. On the one hand, such technologies increase production efficiency; on the other, they raise serious cybersecurity concerns about their safety and reliability.

On this basis, Kaspersky conducted a survey to gather opinions on robots in the workplace from people working in specific industries and to understand if there are any challenges or security issues that arise when robots are introduced into the workplace.

According to this study, most employees in companies with functioning production robots are aware of possible cybersecurity risks. Only 13% of them believe disabled robots can be fixed immediately in case of a cyberattack. More than half (52%) expect that recovery operations would take a few weeks or longer, and more optimistic respondents (26%) are of the opinion that a return to normal production processes could happen within a few days.

Concerns about lack of control and regulation by third parties

Another important finding of the survey shows people are concerned about the lack of control and regulations by third parties when it comes to robots and their autonomy.

Keeping in mind the risks that rapid robotisation may bring, more than half (60%) of respondents say it’s unclear who takes ultimate responsibility when robots fail due to an equipment malfunction or a cyberattack. This is one of the reasons employees want to reserve leadership roles for people. The majority of respondents (67%) believe robots can increase production efficiency, but only with human oversight, and just a quarter (24%) are ready to entrust the management of any production process entirely to an AI robot.

“In this study we asked respondents to assess not only the level of companies’ robotisation but also their ability to resist related cyber risks,” comments Andrey Suvorov, Head of KasperskyOS Business Unit. “It turned out that many employees had mixed feelings when assessing how protected robots are. They are confident that it’s necessary to pay more attention to their security, and are sceptical about how quickly a robot can recover after a cyber incident.

“In fact, we face concerns about the proper operation and protection of modern industrial IoT systems, with all the variety of complicated smart devices inside. That’s why we offer Cyber Immune solutions to protect specific enterprise units or the entire IT system, making industrial robots, ICS machines or autonomous vehicles immune to most cyberattacks without using applied security tools.”

The full report with more insights on the consequences of automation and the increased use of robots is available here.

