What is the UK Government's National AI Strategy?
At the end of September, the UK Government revealed its National Artificial Intelligence Strategy. It is described as ‘representing the start of a step change for AI, recognising the power of the technology to increase resilience, productivity, growth and innovation across private and public sectors.’
In this feature, we talk to businesses about the move and what it could mean for enterprises of all sizes, for education and training, for ethical and moral considerations, and for AI’s development going forward.
Three pillars
The strategy is made up of three main pillars: investing in the long-term needs of the AI ecosystem; ensuring AI benefits all sectors and regions; and governing AI effectively.
AI Magazine asked Harvey Lewis, Associate Partner and Chief Data Scientist in EY’s tax practice, what he thought about them. “Firstly, the new AI strategy assumes that innovation in AI is largely driven by academia and the start-up community, with commercialisation and scale-up via bigger business. But most recent advances have been created by big tech firms,” he begins. “Boosting the UK’s competitiveness in AI depends not just on investment in academia, but also on incentivising UK-based tech firms to open up their technology and data, via tax credits or grants.”
Of the second pillar, he comments: “The strategy identifies the main barriers to progress as access to data, infrastructure and talent. Recent EY research highlights just how acute these challenges are for SMEs. For example, in the region of 90% of large organisations have already adopted AI or plan to do so soon, but this falls to around 48% for SMEs.”
On pillar three, Lewis suggests: “The government could consider establishing a set of principles by which AI systems must operate, similar to GDPR. It could also identify key roles and who should be held accountable for compliance. The focus on regulating the operation, not just the development, of AI systems is key.” He concludes that ensuring outputs and outcomes are fair, transparent, ethical and safe would help the UK pivot from a ‘risk-first’ approach to an ‘opportunity-first’ one.
Upskilling to compete
NVIDIA, the GPU designer, has contributed to the Government’s AI strategy through Cambridge-1, the country’s most powerful AI supercomputer, which is dedicated to advancing healthcare research.
David Hogan, VP Enterprise EMEA at the company, says the new strategy is an important step in furthering the UK’s position as a global leader, and that it is good to see it focus on computational capability, which is vital to success. He adds: “However, AI researchers and start-ups need access to the right tools. We, like many other countries, face shortages of skilled AI experts. Upskilling is now a key requirement for the strategy’s successful deployment. NVIDIA has been able to train more than 7,000 developers in the UK via our Deep Learning Institute, and we continue to see strong demand for hands-on training, both in industry and academia.”
He adds that supporting and investing in the start-up community is another area where the UK needs to keep pace with other Western nations, and says the company is now inviting applications from healthcare start-ups for compute time on Cambridge-1.
Dr Ems Lord, Head of NRICH, the maths support programme at the University of Cambridge, says the growth of AI means more people will need to apply mathematical knowledge and skills without relying completely on AI, because AI systems make mistakes.
She explains: “It’s important that at school we value teaching youngsters number sense through reflecting on answers to mathematical problems, rather than just relying on methodical calculation.”
Collaboration and diversity
According to Adam Gibson, Co-founder of AI ecosystem builder Skymind, collaboration between parties and tapping into existing expertise are vital to securing the strategy’s long-term success.
He gives the example of the Eclipse Foundation, which can help the Government and businesses transition to AI economies effectively, and adds that diversity is equally important.
“The best outcomes for enhanced AI services require diversity behind those creating and implementing the technology. That means attracting more women and people from different social and ethnic backgrounds. Getting the right mix of AI creators will also require mandatory STEM education at school and mentoring programmes encouraging AI career paths. We can’t have progress in AI without diversity; we need to design economies which reflect the populations they serve,” he says.
Transparency in data, design and disclosure
Kasia Borowska, Founder and MD of Brainpool.ai, emphasises the need for transparency at all levels to avoid resistance to, or fear of, the strategy. She says data transparency means knowing what data was used, where it came from, how it was labelled and which features it included. Design transparency means knowing the types of decisions being made by the technology, the role of humans in approving those decisions, and when people are interacting with AI tools. Finally, disclosure transparency means having processes and guidelines for remedying issues of bias and abuse, and for when things have gone wrong.
This is echoed by Anant Ranganathan, VP and Head of Data North America at First Derivative.
“The speed and scale at which machine intelligence is proliferating across all industries require early identification of, and planning for, the new risk types that emerge. The most common concerns are privacy risk to the individual and algorithmic bias. Another is the moral risk of making decisions that are not in the best interest of an individual or society, and only benefit a chosen few,” he says.
Proof of the pudding
Peter van der Putten, Director of Decisioning and AI Solutions at Pegasystems and Assistant Professor of AI and Creative Research at Leiden University, says the strategy is a sign that AI use is becoming more pervasive, and shows a commitment from the Government that it should benefit all.
“What is also interesting is that the document lists certain immediate actions that should serve long-term needs, such as improving AI skills, increasing private and public R&D funding and pushing the publication of open data. However, the proof of the pudding is always in the eating, so it will be interesting to revisit the action plan in six to 12 months and see how many actions have been ticked off. Also, one should put one’s money where one’s mouth is, so these plans will need to be accompanied by proper levels of investment. Hard commitments on these investments are still lacking.”