Successful adoption of AI relies on a foundation of trust

By Sofia Ihsan, Trusted AI Lead at EY, and Laura Henchoz, Emerging Tech Markets Leader at EY
Despite its huge potential, the successful adoption of AI is being stifled by a lack of trust. Transparency and accountability are needed

Artificial Intelligence (AI) is a powerful technology that is already having a transformational impact across industries and in our personal lives.

We interact with AI systems daily – for many, without knowing – from using built-in smart assistants on our mobile devices to receiving personalised ads from brands or fraud alerts from our banks.

Despite its huge potential, the successful adoption of AI is being stifled by a lack of trust. AI, for example, is often introduced without the knowledge of end-users – a particular concern when it uses personal data without consent, or without transparency from developers about how that data is being used.

In addition, there is heightened awareness and, unsurprisingly, some concern that AI can “go wrong”, given recent high-profile examples where the outcomes suggested by AI have been biased or unfair, such as the A-Level prediction algorithm. Trust in AI therefore needs to be built in order to increase adoption, and that must include appropriate care around data privacy, transparency, accountability and security. End users should have a clear process for challenging and correcting errors. Alongside this, developers of AI need to consider the human and societal impacts around the autonomy of the individual, the presence of bias, and the safety of those exposed to AI.

Because AI algorithms are sophisticated and data collection often lacks transparency, it can be easy to overlook some of the basic issues surrounding trust, even as AI systems offer advanced capabilities designed to help humans make better-informed and faster decisions. When implementing their AI strategies, business leaders should focus on encouraging ethical and trusted AI by design and putting people at the centre of solutions to ensure they’re achieving the best of both human and artificial intelligence.

As businesses look to harness the power of AI, they must evaluate the concerns around trust, bias and ethics – exploring ways to maximise the potential of AI in the most human-centric and transparent way to build trust.

The trust gap

Without a robust governance and ethical framework in place, AI can pose serious risks through the decisions it makes. System failures can have profound consequences for security, decision-making and credibility, further impacting trust and potentially leading to costly lawsuits, reputational damage or customer backlash.

Many companies are currently using AI in low-risk areas with human oversight, while the technology is still in its relative infancy. Over time, as use cases increase along with its capabilities, AI will be relied upon to make more decisions, and in more high-risk areas. This requires leaders to place greater importance on building a foundation of trust with their customers, ensuring not only that customers are comfortable sharing their data, but also that systems are refined as they become more intelligent.

There is also a growing trend for businesses to rely on data, algorithms and AI models that originate outside the company. This adds to the complexity of building trust, since it requires additional checks and balances to ensure the data and systems are appropriate for use. Put simply, there is a push and pull between the benefits of AI and the concerns around it that is leading to a gap in trust – and this, crucially, is what needs to be addressed and managed.

The steps to building trust

Creating a robust framework to manage the risks posed by AI is critical. At EY we have worked to establish three key steps to building trust in AI: transparency and the ability to explain outcomes as required; understanding and mitigating biases; and protecting data and privacy.

1. Transparency and providing an explanation

It is important from the outset to keep an accurate inventory of where AI applications are being used and to communicate this to affected users (customers and employees). The inventory needs to be accompanied by a framework for understanding the relative risk and impact of each AI application, so that appropriate measures can be put in place to manage them through the development or procurement process and to monitor them after deployment.
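
For teams that track such an inventory in code rather than in a spreadsheet, a minimal sketch of one possible representation follows. The field names, risk tiers and example entry are illustrative assumptions for this article, not a prescribed EY framework.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List


class RiskTier(Enum):
    """Illustrative risk tiers; real frameworks will define their own."""
    LOW = "low"        # e.g. internal productivity tools with human review
    MEDIUM = "medium"  # e.g. customer-facing personalisation
    HIGH = "high"      # e.g. decisions affecting credit, employment or health


@dataclass
class AIApplication:
    """One entry in an inventory of deployed AI applications."""
    name: str
    owner: str                 # accountable business owner
    purpose: str               # plain-language description for affected users
    personal_data_used: bool
    affected_users: List[str]  # e.g. ["customers", "employees"]
    risk_tier: RiskTier
    last_reviewed: str         # ISO date of the most recent risk review


def applications_in_tier(inventory: List[AIApplication],
                         tier: RiskTier) -> List[AIApplication]:
    """Filter the inventory so higher-risk applications can be prioritised
    for monitoring after deployment."""
    return [app for app in inventory if app.risk_tier == tier]


inventory = [
    AIApplication(
        name="Fraud alert scoring",
        owner="Payments team",
        purpose="Flags potentially fraudulent card transactions for review",
        personal_data_used=True,
        affected_users=["customers"],
        risk_tier=RiskTier.HIGH,
        last_reviewed="2024-01-15",
    ),
]

for app in applications_in_tier(inventory, RiskTier.HIGH):
    print(f"{app.name} ({app.owner}) - last reviewed {app.last_reviewed}")
```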

Organisations should also be able to explain how AI will use and interpret data, the AI application’s decision framework as appropriate, and the consistency of its decisions. This should be documented and readily available for review, challenge and validation throughout the development and use of the AI system.

2. Understanding biases

People exposed to AI must be educated on its potential flaws, such as biases, and kept up-to-date on how the business is working to reduce or eliminate bias and promote fairness. Discriminatory or unfair algorithms and data sets can be highly damaging to some of society’s most vulnerable populations and further contribute to a lack of trust in the AI system.
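
As one deliberately simplified illustration of what “understanding biases” can mean in practice, the sketch below compares the rate of favourable outcomes a model produces across two groups and flags a large gap for review. The group labels, data and review threshold are assumptions made for this example; real fairness assessments use richer metrics and domain context.

```python
from collections import defaultdict
from typing import Dict, List, Tuple


def selection_rates(outcomes: List[Tuple[str, int]]) -> Dict[str, float]:
    """Rate of favourable outcomes (1) per group from (group, outcome) pairs."""
    totals: Dict[str, int] = defaultdict(int)
    favourable: Dict[str, int] = defaultdict(int)
    for group, outcome in outcomes:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}


def selection_rate_gap(rates: Dict[str, float]) -> float:
    """Difference between the highest and lowest group selection rates.
    A large gap is a prompt for investigation, not proof of unfairness."""
    return max(rates.values()) - min(rates.values())


# Illustrative model decisions: (group, 1 = favourable outcome)
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]

rates = selection_rates(decisions)
gap = selection_rate_gap(rates)
print(rates, f"gap={gap:.2f}")
if gap > 0.2:  # assumed review threshold for this sketch
    print("Selection-rate gap exceeds threshold - review the model and data.")
```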

3. Protecting data and privacy

Privacy and data rights should also be a primary ethical consideration in the development of AI, especially given that the misuse of personal data has been a key driver of backlash against technological developments in the past.

Data used by the AI system must be secured against the evolving threats of unauthorised access, corruption and cyberattacks, so customers feel comfortable sharing their information.
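
As a small illustration of one such safeguard, the sketch below encrypts a customer record before it is written to storage used by an AI system. It relies on the third-party Python cryptography package (an assumption for this example), and encryption at rest is of course only one element of securing data against unauthorised access.

```python
# Requires the third-party "cryptography" package: pip install cryptography
from cryptography.fernet import Fernet

# In practice the key would come from a managed secrets store,
# not be generated alongside the data it protects.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b'{"customer_id": "12345", "email": "user@example.com"}'

# Encrypt personal data before it is written to storage used for model training.
token = cipher.encrypt(record)

# Only services holding the key can recover the original record.
assert cipher.decrypt(token) == record
print("Record encrypted; ciphertext length:", len(token))
```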

Disregarding these principles can lead to a decrease in brand loyalty and customers jumping ship to a company that better protects them. 

Embed trust from the beginning 

To increase public trust and adoption of AI, the decision frameworks of AI systems should be clearly communicated, with stronger safeguards to ensure AI deployed in the real world is fair and transparent. There must also be appropriate regulations in place to ensure the AI is both safe and secure without violating people’s privacy.

As with any innovative technology, businesses looking to harness the power of AI should ensure they have a robust governance and ethical framework in place to manage the risks and help bridge the trust gap.

Trust needs to be embedded from the very beginning and in line with good practices, so that AI continues to be adopted in ways that are responsible and benefit society and the wider economy.
