Nov 27, 2020

Comment: Design governance is critical to AI impartiality

Vidya Phalke
As many as four in five risk professionals are uneasy about their company’s AI governance. Vidya Phalke advises on how to make it better...

On the face of it, artificial intelligence (AI) should be unbiased, impartial and completely objective. After all, it’s a machine making decisions, not a person. People are fallible, suggestible, show favouritism and make judgment calls. Machines are logical, scientific and emotionless. Yet they’re programmed by people; their ‘decision making’ results from analysis whose rules are set by humans. It’s possible that bias, unconscious or otherwise, is built into AI before it’s even put to work. If that’s the case, strong AI governance – especially at design time – is paramount if the technology’s impact is to be both positive and ethical.

Risk exposure

AI is principally deployed to automate and increase efficiency. These are primary objectives for most businesses at any time, but perhaps in particular at times of crisis such as now, when companies must respond in an agile way to changing conditions and make calculated decisions to maintain resilience and protect operations.

AI increasingly supports the way companies assess and mitigate risk and the way they comply with internal and external rules and regulations. For many organisations, it is an essential tool in counteracting cyber threats. Overall, businesses are becoming more reliant on automation to turn the vast amounts of data they work with into meaningful information for informed decisions. 

However, AI on its own is not necessarily the panacea some assume each new technology will be. Like anything else, it has to be implemented properly, with checks and balances to ensure it works correctly, is used in the right way and delivers appropriate results for intended purposes.

It can be a strong tool in identifying and tackling risk, but without effective governance, AI can create risk exposure that the business must protect itself from. Here, there may be work for companies to do: one study revealed that 80 per cent of risk professionals are not confident in the AI governance they have in place.

The regulatory landscape

To make a start, companies may look to both internal controls and external regulation for guidance. Earlier this year, it was reported that the European Commission would draft regulation for AI to help prevent its misuse. This is likely to be fraught with difficulties, and there will be those who argue that freedom to innovate is needed if the technology’s full potential is to be explored.

While continually monitoring the regulatory landscape, companies must establish their own governance, risk and compliance (GRC) measures around AI. Not just to ensure compliance with relevant external mandates, but also to ensure AI does not compromise corporate ethical practice and that any risk exposure from AI itself is discovered and addressed.

Companies will know that if they don’t safeguard ethical practice, they may face consequences. The key question here is decision making: we must define a framework for where to draw the line. For example, we intuitively know that getting a new song recommendation from Spotify is not a big decision, whereas a bot deciding to change a morphine dosage during surgery is clearly not something we would agree to.
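One way to make that line explicit is to tier decisions by impact and gate how much autonomy the system is allowed at each tier. The sketch below is purely illustrative – the three tiers and the examples attached to them are hypothetical, not an established framework:

```python
# Hypothetical decision-autonomy tiers: a sketch of 'where to draw the
# line', not an established standard or anyone's production policy.
from enum import Enum

class Tier(Enum):
    AUTONOMOUS = 1     # low impact: AI acts alone (e.g. a song recommendation)
    HUMAN_REVIEW = 2   # moderate impact: AI proposes, a person approves
    HUMAN_ONLY = 3     # high impact: AI may inform, never decide (e.g. dosage)

def allowed_to_act(tier: Tier, human_approved: bool = False) -> bool:
    """Gate an automated action on its assigned impact tier."""
    if tier is Tier.AUTONOMOUS:
        return True
    if tier is Tier.HUMAN_REVIEW:
        return human_approved
    return False       # HUMAN_ONLY: the system never acts on its own

assert allowed_to_act(Tier.AUTONOMOUS)
assert not allowed_to_act(Tier.HUMAN_ONLY, human_approved=True)
```

The point is not the code but the discipline: every automated decision gets an explicit tier before deployment, so the line is drawn at design time rather than discovered after an incident.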

This is where multi-disciplinary thought leadership is needed. If decisions resulting from technology a company has implemented are judged to be biased and to cause detriment, negative outcomes can follow. Financially, these may be felt in the form of direct fines – if there is cause for regulatory bodies to investigate – but more so through reputational damage, which could be sustained and could impact share prices and customer loyalty.

Effective AI governance spans data and algorithms

In the same way that data forms the cornerstone of AI, it is also at the root of GRC; data management is therefore central to governing AI design and usage for ethical outcomes. All relevant data should be considered in an AI tool’s design, analysis and informed decision making. Information silos in the business will impede this.

This is also a great opportunity for businesses to put AI, and the ethics it entails, at the core of design time – rather than waiting for standards, regulations and policies. Standards are clearly needed, but given the pace at which AI is evolving, waiting for them may mean acting too late. We need to keep learning from the experiments going on around us – from Alexa, to chatbots that help us buy clothes, to assistants that guide email management, all the way to autonomous cars – and in each case apply a multi-disciplinary lens to evolve a design thinking that creates a foundation for trust. The best way to ensure this is to involve cross-disciplinary teams at each phase of AI rollout: conception, design, prototyping, training, testing and, finally, deployment.
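One lightweight way to operationalise that involvement is a sign-off gate per phase, so no phase begins until every required discipline has approved the previous one. A minimal sketch – the phases follow the list above, while the role names are assumed for illustration:

```python
# Hypothetical phase gate for an AI rollout. The required roles are
# illustrative assumptions, not a prescribed governance standard.
PHASES = ["conception", "design", "prototyping", "training", "testing", "deployment"]
REQUIRED = {"engineering", "legal", "ethics", "domain_expert"}

def next_phase(current: str, signoffs: set) -> str:
    """Advance to the next phase only once every discipline has signed off."""
    missing = REQUIRED - signoffs
    if missing:
        raise ValueError(f"cannot leave {current}: missing sign-off from {sorted(missing)}")
    i = PHASES.index(current)
    return PHASES[min(i + 1, len(PHASES) - 1)]

print(next_phase("design", {"engineering", "legal", "ethics", "domain_expert"}))
# -> prototyping
```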

AI provides valuable tools for businesses to automate inefficient manual processes and achieve more for customers and stakeholders. Like any other technology, its overall value will depend both on the capabilities it offers and on how it is implemented and managed. If governance is lacking or not robust, the balance needed between those two factors won’t exist, leaving companies unnecessarily exposed to risk and, ultimately, failing to perform in the way they need to.

Vidya Phalke is chief innovation and infosec officer at MetricStream


Jun 10, 2021

Google is using AI to design faster and improved processors

Google scientists claim their new method of designing Google’s AI accelerators has the potential to save thousands of hours of human effort

Engineers at Google are now using artificial intelligence (AI) to design faster and more efficient processors, and then using its chip designs to develop the next generation of specialised computers that run the same type of AI algorithms.

Google designs its own computer chips rather than buying commercial products. This allows the company to optimise the chips to run its own software, but the process is time-consuming and expensive, usually taking two to three years.

Floorplanning, a stage of chip design, involves taking the finalised circuit diagram of a new chip and arranging the components into an efficient layout for manufacturing. Although the functional design of the chip is complete at this point, the layout can have a huge impact on speed and power consumption. 
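To see why the layout matters, consider half-perimeter wirelength (HPWL), a standard proxy cost in placement: longer wires generally mean more delay and more power. The sketch below is illustrative only – the block names and coordinates are made up, and the cost Google’s system actually optimises is richer than raw wirelength:

```python
# Half-perimeter wirelength (HPWL): for each net, the half-perimeter of
# the bounding box around its pins. A common proxy for wiring cost.
def hpwl(nets, positions):
    total = 0.0
    for net in nets:                          # each net connects several blocks
        xs = [positions[b][0] for b in net]
        ys = [positions[b][1] for b in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Two candidate placements of the same three (hypothetical) blocks:
net_list = [["alu", "cache", "io"]]
spread  = {"alu": (0, 0), "cache": (9, 0), "io": (0, 9)}
compact = {"alu": (0, 0), "cache": (1, 0), "io": (0, 1)}
print(hpwl(net_list, spread))   # 18.0 -> longer wires, more delay and power
print(hpwl(net_list, compact))  #  2.0 -> tighter layout, cheaper wiring
```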

Previously, floorplanning has been a highly manual and time-consuming task, says Anna Goldie at Google. Teams would split larger chips into blocks and work on parts in parallel, fiddling around to find small refinements, she says.

Fast chip design

In a new paper, Googlers Azalia Mirhoseini and Anna Goldie, and their colleagues, describe a deep reinforcement-learning system that can create floorplans in under six hours. 

They have created a convolutional neural network system that performs the macro block placement by itself within hours, achieving an optimised layout; the standard cells are automatically placed in the gaps by other software. This ML system should be able to produce a strong floorplan far faster than human designers can. The neural network gradually improves its placement skills as it gains experience, according to the AI scientists.
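For a rough intuition of how placement becomes a learning problem, the toy below places macros on a grid one at a time and scores the finished layout by wirelength, with the reward arriving only once the episode ends. It is a deliberately simplified sketch with a random stand-in policy – not the convolutional policy network the Googlers describe:

```python
# Toy sequential macro placement. The grid, netlist and random 'policy'
# are illustrative stand-ins; a real system would learn the policy from
# many such rollouts instead of sampling at random.
import itertools, random

GRID = 8                                      # an 8x8 grid of candidate cells
macros = ["m0", "m1", "m2", "m3"]
nets = [["m0", "m1"], ["m1", "m2", "m3"]]

def wirelength(pos):
    total = 0
    for net in nets:
        xs = [pos[m][0] for m in net]
        ys = [pos[m][1] for m in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

def episode(policy):
    """Place macros one per step; the reward arrives only at the end."""
    free = set(itertools.product(range(GRID), range(GRID)))
    pos = {}
    for m in macros:
        cell = policy(m, free, pos)           # one placement decision per macro
        pos[m] = cell
        free.remove(cell)
    return -wirelength(pos)                   # higher reward = shorter wires

random_policy = lambda m, free, pos: random.choice(sorted(free))
best = max(episode(random_policy) for _ in range(200))
print("best reward over 200 random rollouts:", best)
```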

In their paper, the Googlers said their neural network is "capable of generalising across chips — meaning that it can learn from experience to become both better and faster at placing new chips — allowing chip designers to be assisted by artificial agents with more experience than any human could ever gain."

Generating a floorplan can take less than a second using a pre-trained neural net, and with up to a few hours of fine-tuning the network, the software can match or beat a human at floorplan design, according to the paper, depending on which metric you use.

"Our method was used to design the next generation of Google’s artificial-intelligence accelerators, and has the potential to save thousands of hours of human effort for each new generation," the Googlers wrote. "Finally, we believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields.
