The need for ethical AI to avoid a trust gap with customers

GenAI offers game-changing benefits, but the technology is not without its risks. Organisations must consider the ethics of AI to avoid a ‘trust gap’

As the global adoption of AI continues to accelerate, the question of how to ensure its ethical use has never been more prominent.

As the use of generative AI increases - research by Salesforce shows that three out of five workers (61%) currently use the technology - knowledge gaps around how to use it responsibly persist. Half of the workers surveyed by the CRM leader said they worry generative AI outputs are inaccurate, while 59% worry the outputs are biased.

Generative AI clearly offers game-changing benefits, but the technology is not without its risks. As explained by QuantumBlack - McKinsey & Company’s AI consultancy arm - technology leaders must design their teams and processes to mitigate those risks from the start: not only to meet fast-evolving regulatory requirements but also to protect their business and earn consumers’ digital trust.

This is reflected in research by Salesforce, which found that, without taking action, businesses could soon see what it describes as an ‘AI trust gap’ with customers. As brands increasingly adopt AI to increase efficiency and meet rising customer expectations, nearly three-quarters of respondents said they are concerned about the unethical use of the technology.

Interestingly, the company’s research found that consumers have become much less open to using AI over the last year, highlighting concerns about its ethical use. While 73% of business buyers and 51% of consumers are open to the use of AI to improve their experiences, both figures have dropped significantly since last year’s survey. Companies therefore have an opportunity to close this gap by implementing ethical guidelines and providing better visibility into how the technology is applied.

In addition to insights on generative AI, Salesforce’s sixth State of the Connected Customer report revealed evolving influences on purchase decisions and what customers look for from marketing, commerce, sales, and service interactions.

Salesforce’s survey results show a clear distinction between customers’ overall trust in companies and their faith that those companies will use new AI innovations responsibly. For example, while 76% of customers trust companies to make honest claims about their products and services, only 57% trust them to use AI ethically.

“Ethical AI is a pressing concern for our customers and for our customers’ customers,” said Kathy Baxter, Principal Architect, Responsible AI & Tech at Salesforce. “Getting it right means creating AI with trust at the centre of everything you do. That means gathering data with transparency and consent, training algorithms on diverse data sets, and never storing customer information insecurely.”

Tech leaders remain wary of the ethical considerations of AI

Today, technology leaders are fully embracing the opportunities created by AI. However, EY’s latest CEO Outlook Pulse survey found that executives remain wary of its consequences - not least its ethical implications.

Nearly two-thirds (65%) of CEOs told EY that AI is a force for good, but a near-equal proportion say more work is needed to address social, ethical and security risks - which range from cyberattacks and disinformation to deepfakes.

The survey also found that 66% of CEOs believe the impact of AI replacing humans in the workforce will be counterbalanced by new roles and career opportunities that the technology creates.

"CEO concerns about the unintended consequences of AI reflect a broader confluence of – sometimes dystopian - views in media, society, and contemporary culture,” comments Andrea Guerzoni, EY Global Vice Chair – Strategy and Transactions. “They see a role for business leaders to address these fears – an opportunity to engage on the ethical implications of AI and how its use could impact key areas of our lives, such as privacy.”

Ensuring the safe development of AI products

As Eduardo Azanza, CEO at Veridas, explains, with AI technology being adopted so rapidly, it is essential to ensure that businesses use it responsibly and ethically.

Highlighting this, the World Ethical Data Foundation earlier this year released a framework for developing AI products safely, in the hope of tackling some of these challenges.

“For newer forms of AI to be successful in the long term, there needs to be trust from customers and the wider public,” commented Azanza. “Organisations looking to implement AI should strongly consider following the framework released by the World Ethical Data Foundation, or the ones released by NIST and the NCSC.

“AI technology should always be based on transparency and compliance, not only with legal and technical standards but also with the highest ethical values. As AI continues to see exponential growth, it is extremely important that organisations rigorously audit and document the risk analysis of their systems.

“Ensuring that technologies comply with stringent privacy standards and frameworks not only keeps the business itself secure but also gains the trust of customers, knowing that AI is used ethically and responsibly.”

A lack of diversity in tech

Without human intervention, AI could end up reinforcing damaging societal biases. “People are biased - mostly in an unconscious way - so it is possible that, without a careful methodology, such biases are embedded in both the data and in any other developer’s decisions in building an AI system,” Francesca Rossi, IBM Fellow and AI Ethics Global Leader, told us earlier this year.
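
To make that concrete, the short sketch below shows one simple way bias can surface in training data before a model ever learns from it. The hiring records and the four-fifths screening threshold are hypothetical, illustrative assumptions - a rule-of-thumb check, not the methodology of IBM, Salesforce or any framework mentioned here; real-world audits rely on dedicated fairness toolkits.

```python
# Minimal sketch: surfacing skewed outcomes in training data.
# All records and the 0.8 threshold below are hypothetical assumptions.

from collections import defaultdict

# Hypothetical historical hiring records: (group, positive_outcome)
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Count totals and positive outcomes per group
totals, positives = defaultdict(int), defaultdict(int)
for group, outcome in records:
    totals[group] += 1
    positives[group] += outcome  # True adds 1, False adds 0

# Selection rate per group, e.g. group_a: 0.75, group_b: 0.25
rates = {g: positives[g] / totals[g] for g in totals}
print("Selection rates:", rates)

# Four-fifths rule of thumb: flag any group whose rate falls below
# 80% of the best-performing group's rate - a common screening heuristic
best = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * best:
        print(f"Warning: {group} rate ({rate:.2f}) is below 80% of the "
              f"best rate ({best:.2f}) - the data may encode bias")
```

A check like this only flags a skewed outcome in the data; deciding whether that skew reflects harmful bias, and how to correct it, still requires the diverse human judgement the interviewees describe.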

On this topic, in a fireside chat at this year’s Tech LIVE Virtual London, Nayur Khan, Partner at QuantumBlack, discussed a range of topics around ethical AI: from the biggest ethical challenges and how they should be managed to the need for diversity in the teams that build AI.

“There is a lack of diversity in tech generally,” he said. “22% of tech roles across Europe are held by women. In the US, less than 5% of software developers are black. There’s a problem there.

“Yes, we can do advanced maths and yes, we can create these new checks and guardrails. However, there is a lack of diversity. So if you're building AI, the first question I'd ask is, are you building something that's going to impact people's lives?

“Do you have a team in place that is representative of the AI that you're building that's going to be used by individuals? 

“Generally, we talk about this a lot. There is a lot of diverse talent out there and that has benefits from an economic point of view, but from an AI point of view what makes it even more important is that you're bringing those different perspectives into the discussion.”
