Building Trustworthy AI: The Importance of Enterprise Ethics

With insights from SAS and Abnormal Security, we consider what ethical principles should guide AI development and how businesses can ensure accountability

As the AI landscape continues to evolve, the use of AI technologies – including generative AI (Gen AI) – keeps growing, with research by Salesforce finding that 61% of workers use it at work. But as more businesses seek to develop and deploy the technology, the global conversation is shifting towards how to do so safely.

AI ethics refers to a set of guiding principles that stakeholders follow to ensure that AI is used responsibly. These principles underpin an overall approach to AI that is safe, secure and human-led – meaning AI is less likely to result in job cuts.

This month, AI Magazine speaks with experts from SAS and Abnormal Security, examining how implementing stringent AI ethics principles not only keeps a business secure, but also maintains the trust of its customers.

Implementing ethical AI principles

Whilst legislation such as the EU AI Act is starting to be implemented around the world, it is important for organisations to have their own ethical strategies in place to protect their workforces. This is particularly true as the use of AI continues to grow, making it essential that businesses understand AI risks and implement frameworks accordingly.

“The adoption of ethical principles can be ranked and prioritised according to the organisation’s data and analytical maturity,” says Prathiba Krishna, AI & Ethics Lead at SAS. “To keep trustworthy AI at the centre of innovation, we have six guiding principles: human centricity, transparency, inclusivity, accountability, privacy and security, and robustness.”

Developing ethical frameworks and setting clear industry standards to follow is a good starting point for harnessing responsible AI. Likewise, as Prathiba explains, incorporating ethics into the design of AI models (e.g. drawing on diverse perspectives and datasets) can lead to more positive outcomes.

“They can also look to collaborate with academic institutions and develop multi-stakeholder initiatives to help ensure these approaches get implemented,” she adds.

When it comes to deploying ethical principles, it is important for businesses to guide AI development with transparency to maintain human accountability over the technology.

“Any company that develops AI or uses AI in their products should prioritise transparency as much as possible, with assurances around how the AI operates and how they manage user data,” says Dan Shiebler, Head of Machine Learning at Abnormal Security.

“Any good product will also prioritise human accountability over system behaviour – meaning, humans should be able to make the final decision when it comes to executing, and potentially undoing, any actions taken by AI.”
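In practice, that principle can be engineered directly into a product: AI-proposed actions wait in a queue until an operator approves them, and every executed action keeps an undo path. The Python sketch below illustrates one way this could look – the class and action names are hypothetical, not drawn from either company's products.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class ProposedAction:
    """An action suggested by an AI system, pending human sign-off."""
    description: str
    execute: Callable[[], None]
    undo: Callable[[], None]

@dataclass
class HumanApprovalGate:
    """Holds AI-proposed actions until a human operator signs them off."""
    pending: List[ProposedAction] = field(default_factory=list)
    executed: List[ProposedAction] = field(default_factory=list)

    def approve(self, action: ProposedAction) -> None:
        # The human, not the model, makes the final call to execute.
        self.pending.remove(action)
        action.execute()
        self.executed.append(action)

    def rollback(self, action: ProposedAction) -> None:
        # Executed actions stay reversible by a human operator.
        self.executed.remove(action)
        action.undo()

gate = HumanApprovalGate()
gate.pending.append(ProposedAction(
    description="Quarantine a message flagged as phishing",
    execute=lambda: print("message quarantined"),
    undo=lambda: print("message restored"),
))
gate.approve(gate.pending[0])  # the human decision point
```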

However, ethical challenges can arise in specific AI application domains such as healthcare and criminal justice. Asked about these, Prathiba notes that decisions in both industries can have a significant impact on the individual.

“Healthcare can have a unique set of ethical challenges like misdiagnosis and inappropriate treatment, refusal of medication, loss of trust and health disparities when an ethical lens is not applied,” she says. “Within criminal justice, ethical challenges can arise like erosion of trust in the legal system, compromised fairness in bail decisions and incorrect judgements.”

Dan adds: “Most AI models today operate as a ‘black box’, offering very little visibility into how decisions are made. This can lead to biased or unsafe decisions. In industries such as criminal justice and medicine, where decisions can have significant, life-changing impacts on users, these risks are exacerbated and could cause direct or indirect harm to individuals if AI gets misused.”
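One common mitigation is to attach post-hoc explanations to an otherwise opaque model. As a minimal sketch – using scikit-learn's permutation importance on synthetic data, rather than any tooling from SAS or Abnormal Security – the idea is to measure how much each input feature actually drives the model's output:

```python
# Minimal sketch: post-hoc explanation of a "black box" classifier
# via permutation importance (synthetic data, illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does shuffling each feature degrade accuracy on held-out data?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance {score:.3f}")
```

Features whose shuffling barely moves accuracy contribute little to the decision – a first step towards the kind of visibility Dan describes.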

Taking accountability: Keeping AI human-centric

In order to keep AI ethical, businesses must ensure that transparency and accountability are fostered within their AI systems.

This can be done by engaging stakeholders, establishing oversight mechanisms and providing education and training to ensure that AI is used responsibly throughout the workplace. This can help foster a positive culture within an enterprise, leading to greater public trust in AI, in addition to promoting inclusive and sustainable developments.

“AI systems should be accompanied by explanations and should also require humans to take responsibility for the decision of the AI system – in the same way that a manager takes responsibility for the actions of their employees,” Dan comments.

“Trustworthy AI is something we need to plan before the first line of code,” Prathiba adds. “It needs to continue throughout the AI lifecycle as a continuous process. Having capabilities in your AI platform is necessary, but technology is not enough – it also takes a comprehensive governance approach, involving people and solid processes.

“Transparency and accountability go hand in hand with AI systems. Enabling transparency also means enabling a degree of accountability. This also establishes data lineage and explains how model predictions are made.”
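One way to make lineage concrete is an audit record written alongside every prediction, tying it back to the model version and training data that produced it. The schema below is a hypothetical sketch, not a feature of any particular platform:

```python
import hashlib
import json
from datetime import datetime, timezone

def lineage_record(model_version: str, training_data_uri: str,
                   features: dict, prediction: str) -> dict:
    """Tie a prediction back to the model and data that produced it."""
    payload = json.dumps(features, sort_keys=True).encode()
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "training_data_uri": training_data_uri,
        "input_hash": hashlib.sha256(payload).hexdigest(),
        "prediction": prediction,
    }

record = lineage_record("fraud-model-1.3",
                        "s3://data/train-2024-05.parquet",
                        {"amount": 120.0, "country": "GB"},
                        "flag_for_review")
print(json.dumps(record, indent=2))
```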

Continuous improvement processes – updating systems when new data becomes available, retraining models so they stay relevant and adapting to changing conditions – are necessary when implementing ethical business AI.
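A hedged sketch of what such a loop could look like: when accuracy on newly labelled data falls below an agreed floor (the threshold and models here are illustrative), the system refits on the combined history.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.85  # illustrative threshold; agree this per use case

def maybe_retrain(model, X_new, y_new, X_history, y_history):
    """Refit the model when live accuracy drops below the agreed floor."""
    live_accuracy = accuracy_score(y_new, model.predict(X_new))
    if live_accuracy < ACCURACY_FLOOR:
        X_all = np.vstack([X_history, X_new])
        y_all = np.concatenate([y_history, y_new])
        model = LogisticRegression(max_iter=1000).fit(X_all, y_all)
    return model

# Simulated drift: live data arrives from a shifted distribution.
rng = np.random.default_rng(0)
X_hist = rng.normal(size=(200, 3))
y_hist = (X_hist.sum(axis=1) > 0).astype(int)
model = LogisticRegression(max_iter=1000).fit(X_hist, y_hist)

X_live = rng.normal(loc=1.0, size=(50, 3))
y_live = (X_live.sum(axis=1) > 2).astype(int)  # the boundary has moved
model = maybe_retrain(model, X_live, y_live, X_hist, y_hist)
```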

“The ability to understand and explain how the decisions have been arrived at by a model is a huge element of transparency,” Prathiba highlights.

To keep AI systems ethical, it is widely argued that a human must remain at the centre to validate AI-made decisions. Solutions generated under that oversight are – in theory – more robust, which should inspire greater trust.

Prathiba says: “AI processes should be able to combine model outcomes and business rules to embed social aspects of the data. It’s important to build fail-safes and adopt redundancy measures to ensure critical decisions are reviewed or overridden by human operators when necessary.

“Ethical inquiry is important and for every AI use case we need to reflect on the guiding principles and above all ask ourselves these three questions: For what purpose? To what end? For whom might this fail?”

Businesses that develop or harness AI will also benefit from a chain of accountability that extends between individuals, companies and systems.

“Models need to run on hardware and be served by software, but we should hold the institutions that own these systems accountable for the behaviour of those systems,” Dan concludes. “This incentivises these people and institutions to create the appropriate guardrails.”

