Formulating AI strategy with support from the private sector

As more governments incorporate AI technology into strategies, we look at the crucial relationships needed between the public and private sector

Undoubtedly, artificial intelligence (AI) holds great economic, social, security and environmental potential. Promising a better future for populations with the creation of national wealth and jobs, AI can enable both the private and public sectors to do more with less, fostering rapid development.

As previously shown by AI Magazine, nations are now investing in AI, quantum computing and synthetic biology to bolster their security defences as national security threats across the globe become more complex.

Now, nations across the world have woken up to the power of AI and as Lee Tiedrich, Ethical Technology Professor at Duke University comments: “There's a recognition that AI is playing a bigger role in society. The technology has been around for decades. We've seen a lot of developments recently in large part because of the proliferation of data, the availability of algorithms, and the lowering cost of computing power.”

Echoing this, Larry Lewis, Vice President and Director of the Centre for Autonomy and Artificial Intelligence at CNA, said: “AI is the latest technological solution to give governments an edge; it is believed to be particularly potent both in its ability and in the many roles that it can play. It's a natural thing for governments to be looking for the next big thing, and it turns out that is AI.”

As nations across the globe formulate AI strategies and incorporate this technology into existing ones, it is imperative that ethics and regulation are considered to ensure those strategies are effective.

Forging good relationships between the public and private sector

AI continues to be a key driver of digital transformation within the private sector. The uptake of this technology has grown exponentially over the past decade; however, governments now largely lag behind in their adoption of it.

Discussing this disparity, Lewis said: “Governmental resources and research is dwarfed by what the industry is doing and so that changes the dynamic in a fundamental way. Now governments really are dependent, in a new way, on what industry is doing and what it's developing.”

In order to close this gap and ensure the public sector successfully incorporates AI into its strategy, Tiedrich explains, the relationship between the public and private sectors has never been more crucial: “This will help them understand how the technology works, get those perspectives, understand what industry's doing, what civil society's concerns are and those of academia and other stakeholders.”

“To forge good relationships between the public and private sectors, round tables can be important for real-time dialogue. Academia can play a really important role in terms of bringing people together to try to forge these dialogues and these exchanges of information to lead to better-informed policy,” she added.

These dialogues will educate policymakers and government leaders; without this education, the technology will not be harnessed to its full potential.

Commenting on the importance of this education, Lewis said: “There needs to be education on a number of different fronts. So education on the governmental side to what AI is and what AI really can do, and what AI is not. Because there are lots of misconceptions out there and fears. Some are founded and some are not.” 

“But then on the flip side, we also need the private sector to understand what these uses for governments are,” he continued.

The need for AI education permeates society. Without education at all levels, citizens won’t understand the benefits this technology brings, and the younger generation will not be prepared for the new jobs that the development of AI will create.

“We want to make sure that the next generation of students has chances to learn but also educate the public,” said Tiedrich.

She added: “We're not at a point right now where we have general AI on the marketplace and education will help us get there.”

Creating direction with AI regulation

As they adopt this technology, nations across the globe find themselves in a race to become an AI superpower. “There should be speed, but there should also be direction. Right now, we're all speed and no direction,” said Lewis.

To gain some direction, policymakers need to consider the regulation and laws around creating and implementing this technology. Within many nations, laws have evolved incrementally and are updated regularly. 

AI and the regulation around it should be treated in the same manner, as Tiedrich explains: “It’s just really important to get some good incremental steps down.”

“One key step is getting some of the standards in place and I know a lot of organisations are working on that and we need to continue to forge more collaboration around that,” she added.

Many of these standards need to focus on the ethical considerations that come with AI technology. Policymakers need to strike the right balance within their policy to tackle ethical issues such as bias, privacy and discrimination.

Tackling the ethical issues and biases that come with AI technology

Bias within AI is an issue faced by many data scientists in the public sector, and it can originate in one of two ways. An AI algorithm can exhibit data bias when it has been trained on biased data. It can also exhibit societal bias, where assumptions and norms within society give analysts blind spots when analysing datasets.

Discussing AI bias and policy, Lewis said: “We need to be mindful. We need to be deliberate about both understanding potential biases and what those effects will be. It's also useful to remember not all biases are bad. You can actually build in biases for certain applications to actually get better performance. However, we need to be deliberate about addressing biases that will lead to bad things.”

Tiedrich outlined the steps needed to balance regulation, ethics, privacy and equality whilst promoting innovation in this space. She said: “Policymakers should not view regulation as presenting an “either/or” choice between promoting innovation on the one hand and fostering ethical use, fairness and equality, and privacy on the other. Instead, they should strive to craft regulations that accomplish both goals. For example, organisations are more likely to purchase and adopt AI products if they know that they can trust them. Individuals are more apt to use AI products that they can trust.”

Concluding, Tiedrich stressed the importance of aligning both innovation and regulation, as doing so will, in turn, create a better technology system for government organisations: “Recognising that these goals are complementary and not mutually exclusive provides a good framework for crafting risk-based and proportionate regulations and standards that promote fairness, explainability, transparency, safety, and accountability as well as innovation.”
