Security execs say human data in AI raises question of trust

Organisations focus primarily on compliance to build trust, a new report claims, but consumers value transparency most and business leaders need to open up

Almost all security professionals agree organisations need to do more to reassure customers about how artificial intelligence works with user data, new research has revealed.

Cisco’s 2023 Data Privacy Benchmark Study is the company’s sixth annual global survey investigating professionals' perspectives on data privacy strategies. This year's study finds that despite a difficult economic environment, organisations continue to invest in privacy, with spending up from US$1.2 million just three years ago to US$2.7 million this year. 

Yet, 92% of respondents believe their organisation needs to do more to reassure customers about their data. The survey also finds that organisations' privacy priorities differ from those of consumers.

The study finds a significant disconnect between the data privacy measures companies take and what consumers expect of them, especially regarding how organisations apply and use AI.

Compliance is not enough to build trust

The Cisco 2022 Consumer Privacy Survey showed 60% of consumers are concerned about how organisations apply and use AI today, and 65% have already lost trust in organisations over their AI practices. Consumers also said the step most likely to make them more comfortable would be the chance to opt out of AI-based solutions. Yet the benchmark study shows that providing opt-out opportunities was the least-selected measure (22%) among those organisations would put in place to reassure consumers.

"When it comes to earning and building trust, compliance is not enough," says Harvey Jang, Cisco Vice President and Chief Privacy Officer. Transparency was the top priority for consumers (39%) to trust companies, whilst organisations surveyed felt compliance was the number one priority for building customer trust (30%).

Even though 96% of organisations believe they have processes in place to meet the responsible and ethical standards that customers expect for AI-based solutions and services, 92% of respondents believe their organisation needs to do more to reassure customers about their data.

Over 70% of organisations surveyed indicated they were getting "significant" or "very significant" benefits from privacy investments, such as building trust with customers, reducing sales delays, or mitigating losses from data breaches. On average, organisations are getting benefits estimated to be 1.8 times spending, and 94% of all respondents indicated they believe the benefits of privacy outweigh the costs overall.

With privacy as a critical business priority, more organisations recognise that everyone across their organisation plays a vital role in protecting data. This year, 95% of respondents said that "all of their employees" need to know how to protect data privacy.  

"An organisation's approach to privacy impacts more than compliance," says Dev Stahlkopf, Cisco Executive Vice President and Chief Legal Officer. "Investment in privacy drives business value across sales, security, operations, and most importantly, trust."
