What is NATO’s new artificial intelligence strategy?
The North Atlantic Treaty Organization (NATO), an alliance of 30 European and North American countries that exists to protect the people and territory of its members, has announced that it will adopt an 18-point AI strategy and launch a “future-proofing” fund that aims to invest around US$1 billion.
Artificial Intelligence (AI) is changing the global defence and security environment and now offers NATO an opportunity to strengthen its ‘technological edge’.
The new strategy outlines how AI can be applied to defence and security in a protected and ethical way. As such, it sets standards of responsible use of AI technologies, in accordance with international law and NATO’s values. It also addresses the threats posed by the use of AI by adversaries and how to establish trusted cooperation with the innovation community on AI.
Artificial Intelligence is one of seven technological areas that NATO Allies have prioritised for their relevance to defence and security. The others are quantum-enabled technologies, data and computing, autonomy, biotechnology and human enhancement, hypersonic technologies, and space.
Responsible use of AI
Allies and NATO commit to ensuring that the AI applications they develop and consider for deployment will be in accordance with the following six principles:
- Lawfulness: AI applications will be developed and used in accordance with national and international law, including international humanitarian law and human rights law, as applicable.
- Responsibility and Accountability: AI applications will be developed and used with appropriate levels of judgment and care; clear human responsibility shall apply in order to ensure accountability.
- Explainability and Traceability: AI applications will be appropriately understandable and transparent, including through the use of review methodologies, sources, and procedures. This includes verification, assessment and validation mechanisms at a NATO and/or national level.
- Reliability: AI applications will have explicit, well-defined use cases. The safety, security, and robustness of such capabilities will be subject to testing and assurance within those use cases across their entire life cycle, including through established NATO and/or national certification procedures.
- Governability: AI applications will be developed and used according to their intended functions and will allow for: appropriate human-machine interaction; the ability to detect and avoid unintended consequences; and the ability to take steps, such as disengagement or deactivation of systems, when such systems demonstrate unintended behaviour.
- Bias Mitigation: Proactive steps will be taken to minimise any unintended bias in the development and use of AI applications and in data sets.
“In order to maintain NATO’s technological edge, we commit to collaboration and cooperation among Allies on any matters relating to AI for transatlantic defence and security. NATO and Allies can help accelerate these efforts by building on the existing adoption efforts of several NATO and Allied bodies,” says NATO in a summary of its AI strategy.