Report calls for transparency in AI automation
A new report says there is too little transparency in algorithmic decision-making, leading to a risk of bias.
Although the Review into Bias in Algorithmic Decision Making from the UK’s Centre for Data Ethics and Innovation (CDEI) concentrates exclusively on the public sector, there are plenty of lessons for private companies hoping to develop more ethical machine-learning practices.
The report follows criticism of the UK government’s handling of exam predictions using an algorithmic approach. The government opted to blame the algorithm, which the authors call unacceptable – they say the ownership of blame should always lie with humans.
It quotes the UN special rapporteur Philip Alston, who said, “Government is increasingly automating itself with the use of data and new technology tools, including AI. Evidence shows that the human rights of the poorest and most vulnerable are especially at risk in such contexts. A major issue with the development of new technologies by the UK government is a lack of transparency.”
However, it also concedes that “despite concerns about ‘black box’ algorithms, in some ways algorithms can be more transparent than human decisions”.
The report recommends:
• The laws supporting equality and human rights should be updated to take AI into account
• Employment practice codes should also be updated to acknowledge AI
• The Home Office should define national policing bodies for data analytics
• National government should develop guidance to support local government
• Government should continue to support and invest in more diversity in the technology sector
• Regulators should provide clear guidance on the collection and use of data including protected characteristics
• The Office for National Statistics should open its data to a range of organisations to evaluate for bias in algorithms
• Sector and industry bodies should create technical guidance for bias detection and mitigation
• Organisations must receive guidance about their legal responsibilities when employing algorithmic decision making
• Government should review, and if necessary update, the clarity of existing equality laws in the face of algorithmic development
• The Equalities and Human Rights Commission (EHRC) should bring algorithmic discrimination into its remit
• Regulators should bring algorithmic discrimination into consideration in their activities
• Regulators should work together to provide jointly issued guidance
• There should be mandatory transparency throughout the public sector on algorithms which have significant influence on decisions affecting people
• Public procurement should be updated to include expected levels of transparency and explainability in AI
The executive summary said: “This review has been, by necessity, a partial look at a very wide field. Indeed, some of the most prominent concerns around algorithmic bias to have emerged in recent months have unfortunately been outside of our core scope, including facial recognition and the impact of bias within how platforms target content (considered in CDEI’s Review of online targeting).
“Our AI Monitoring function will continue to monitor the development of algorithmic decision-making and the extent to which new forms of discrimination or bias emerge. This will include referring issues to relevant regulators, and working with government if issues are not covered by existing regulations.”
Google is using AI to design faster and improved processors
Engineers at Google are now using artificial intelligence (AI) to design faster and more efficient processors, and then using its chip designs to develop the next generation of specialised computers that run the same type of AI algorithms.
Google designs its own computer chips rather than buying commercial products. This allows the company to optimise the chips to run its own software, but the process is time-consuming and expensive, usually taking two to three years to develop a chip.
Floorplanning, a stage of chip design, involves taking the finalised circuit diagram of a new chip and arranging the components into an efficient layout for manufacturing. Although the functional design of the chip is complete at this point, the layout can have a huge impact on speed and power consumption.
Previously floorplanning has been a highly manual and time-consuming task, says Anna Goldie at Google. Teams would split larger chips into blocks and work on parts in parallel, fiddling around to find small refinements, she says.
Fast chip design
The Google engineers have created a convolutional neural network system that performs the macro block placement by itself within hours to achieve an optimal layout; the standard cells are automatically placed in the gaps by other software. This machine-learning system should be able to produce an ideal floorplan far faster than humans at the controls. The neural network gradually improves its placement skills as it gains experience, according to the AI scientists.
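To make the underlying problem concrete: floorplanning is an optimisation task in which blocks are placed so that the wiring between connected blocks is as short as possible. Google's system learns placements with a neural network, but the same search problem can be illustrated with a much simpler classical technique. The sketch below is a hypothetical toy, not Google's method: it uses simulated annealing to place a handful of macro blocks on a grid, minimising total Manhattan wirelength over an invented netlist.

```python
import math
import random

GRID = 8       # 8x8 grid of candidate macro positions (toy scale)
N_MACROS = 4   # number of macro blocks to place

# Hypothetical netlist: pairs of macro indices that must be wired together.
NETS = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]


def wirelength(placement):
    """Total Manhattan distance between every pair of connected macros."""
    total = 0
    for a, b in NETS:
        (xa, ya), (xb, yb) = placement[a], placement[b]
        total += abs(xa - xb) + abs(ya - yb)
    return total


def anneal(steps=20000, seed=0):
    """Simulated annealing: random moves, accepting worse layouts less
    often as the 'temperature' cools, to escape local minima."""
    rng = random.Random(seed)
    # Start from a random legal placement (no two macros share a cell).
    placement = rng.sample(
        [(x, y) for x in range(GRID) for y in range(GRID)], N_MACROS
    )
    cost = wirelength(placement)
    temp = 10.0
    for _ in range(steps):
        # Propose moving one macro to a random free cell.
        i = rng.randrange(N_MACROS)
        new_cell = (rng.randrange(GRID), rng.randrange(GRID))
        if new_cell in placement:
            continue
        old_cell = placement[i]
        placement[i] = new_cell
        new_cost = wirelength(placement)
        # Always accept improvements; accept worse moves with
        # Boltzmann probability exp(-delta / temp).
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / temp):
            cost = new_cost
        else:
            placement[i] = old_cell  # revert the move
        temp = max(0.01, temp * 0.9995)
    return placement, cost


best_placement, best_cost = anneal()
print("final wirelength:", best_cost)
```

The contrast with Google's approach is the point: annealing must re-run its search from scratch for every new chip, whereas the trained neural network carries experience from previous chips into each new placement, which is why it can produce a floorplan in under a second.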
In their paper, the Googlers said their neural network is "capable of generalising across chips — meaning that it can learn from experience to become both better and faster at placing new chips — allowing chip designers to be assisted by artificial agents with more experience than any human could ever gain."
Generating a floorplan can take less than a second using a pre-trained neural net, and with up to a few hours of fine-tuning the network, the software can match or beat a human at floorplan design, according to the paper, depending on which metric you use.
"Our method was used to design the next generation of Google’s artificial-intelligence accelerators, and has the potential to save thousands of hours of human effort for each new generation," the Googlers wrote. "Finally, we believe that more powerful AI-designed hardware will fuel advances in AI, creating a symbiotic relationship between the two fields.