Comment: Design governance is critical to AI impartiality

By Vidya Phalke
As many as one in four risk professionals are uneasy about their company’s AI governance. Vidya Phalke advises on how to make it better...

On the face of it, artificial intelligence (AI) should be unbiased, impartial and completely objective. After all, it’s a machine making decisions, not a person. People are fallible and suggestible; they show favouritism and make judgment calls. Machines are logical, scientific and emotionless. Yet machines are programmed by people: their ‘decision making’ is the result of analysis whose rules are set by humans. It’s possible that bias, unconscious or otherwise, is built into AI before it’s even put to work. If that’s the case, strong AI governance, especially at design time, is paramount if the technology’s impact is to be both positive and ethical.

Risk exposure

AI is principally deployed to automate and increase efficiency. These are primary objectives for most businesses at any time, but perhaps in particular at times of crisis such as now, when companies must respond in an agile way to changing conditions and make calculated decisions to maintain resilience and protect operations.

AI increasingly supports the way companies assess and mitigate risk and the way they comply with internal and external rules and regulations. For many organisations, it is an essential tool in counteracting cyber threats. Overall, businesses are becoming more reliant on automation to turn the vast amounts of data they work with into meaningful information for informed decisions. 

However, AI on its own is not necessarily the panacea some assume each new technology will be. Like anything else, it has to be implemented properly, with checks and balances to ensure it works correctly, is used in the right way and delivers appropriate results for intended purposes.

It can be a strong tool for identifying and tackling risk, but without effective governance, AI can itself create risk exposure that the business must protect itself from. Here, there may be work for companies to do: one study revealed that 80 per cent of risk professionals are not confident in the AI governance they have in place.

The regulatory landscape

To make a start, companies may look to both internal controls and external regulation for guidance. Earlier this year, it was reported that the European Commission would draft regulation for AI to help prevent its misuse. This is likely to be fraught with difficulties, and there will be those who argue that freedom to innovate is needed if the technology’s full potential is to be explored.

While continually monitoring the regulatory landscape, companies must establish their own governance, risk and compliance (GRC) measures around AI – not just to ensure compliance with relevant external mandates, but also to ensure AI does not compromise corporate ethical practice and that any risk exposure from AI itself is discovered and addressed.

Companies will know that if they don’t safeguard ethical practice, they may face consequences. The key question here is decision making: we must define a framework for where to draw the line on what an AI system is allowed to decide. For example, we intuitively know that receiving a new song recommendation from Spotify is a low-stakes decision, whereas a bot deciding to change a morphine dosage during surgery is clearly not something we would agree to.
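The line-drawing framework described above can be sketched as a simple decision gate. This is a hypothetical illustration, not an established standard: the impact tiers, the examples attached to them, and the escalation rule are all assumptions made for the sake of the sketch.

```python
from enum import Enum

class Impact(Enum):
    """Hypothetical impact tiers for AI-made decisions."""
    LOW = 1       # e.g. a song recommendation
    MEDIUM = 2    # e.g. a personalised marketing offer
    CRITICAL = 3  # e.g. a change in medication dosage

def requires_human_review(impact: Impact) -> bool:
    """Assumed policy: anything above LOW impact is escalated
    to a person rather than decided autonomously by the model."""
    return impact is not Impact.LOW

# A song recommendation can proceed automatically...
print(requires_human_review(Impact.LOW))       # False
# ...while a dosage change must be escalated to a human.
print(requires_human_review(Impact.CRITICAL))  # True
```

The value of even a toy gate like this is that it forces the organisation to classify each AI use case before deployment, rather than discovering its stakes after the fact.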

This is where multidisciplinary thought leadership is needed. If decisions resulting from technology a company has implemented are judged to be biased and to cause detriment, negative outcomes can follow. Financially, these may be felt as direct fines – if regulatory bodies have cause to investigate – but even more so as sustained reputational damage, which could hit share prices and customer loyalty.

Effective AI governance spans data and algorithms

In the same way that data forms the cornerstone of AI, it is also at the root of GRC, so data management is central to governing AI design and usage for ethical outcomes. All relevant data should be considered in an AI tool’s design, analysis and decision making; information silos in the business will impede this.

This is also an opportunity for businesses to put AI, and the ethics it entails, at the core of design time – rather than waiting for standards, regulations and policies. Standards are clearly needed, but given the pace at which AI is evolving, waiting for them may come too late. We should keep studying the experiments going on around us – from Alexa, to chatbots that help us buy clothes or manage email, all the way to autonomous cars – and in each case apply a multidisciplinary lens to evolve a design thinking that creates a foundation for trust. The best way to ensure this is to involve cross-disciplinary teams at each phase of an AI rollout: conception, design, prototyping, training, testing and, finally, rollout.
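The phase-by-phase involvement described above could be enforced with a lightweight sign-off register. This is a minimal sketch under stated assumptions: the rule that at least two distinct disciplines must approve each phase, and the discipline names used, are hypothetical choices, not anything prescribed by the article or by any standard.

```python
# The six rollout phases named in the article, in order.
PHASES = ["conception", "design", "prototyping",
          "training", "testing", "rollout"]

def phase_cleared(signoffs: dict, phase: str) -> bool:
    """Assumed rule: a phase is cleared only when at least two
    distinct disciplines (e.g. engineering, legal, ethics)
    have signed it off."""
    return len(signoffs.get(phase, set())) >= 2

# Hypothetical project state: design has cross-disciplinary
# approval, training has been reviewed by engineering alone.
signoffs = {
    "design": {"engineering", "ethics"},
    "training": {"engineering"},
}

print(phase_cleared(signoffs, "design"))    # True  – two disciplines
print(phase_cleared(signoffs, "training"))  # False – one discipline only
```

Encoding the checkpoint this way makes the governance requirement auditable: a project cannot silently skip a phase, because the register shows exactly who reviewed what.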

AI provides valuable tools for businesses to automate inefficient manual processes and achieve more for customers and stakeholders. Like any other technology, AI’s overall value will depend both on the benefits it delivers and on the way it is implemented and managed. If governance is lacking, or is not robust, the balance needed between these two factors won’t exist, leaving companies unnecessarily exposed to risk and, ultimately, failing to perform in the way they need to.

Vidya Phalke is chief innovation and infosec officer at MetricStream
