UK and US research labs conduct military AI demonstrations

The back-to-back exercises aimed to address the challenge of making AI agile, adaptable, trustworthy and accessible under different military use cases

The UK’s Defence Science and Technology Laboratory (DSTL) and the US Air Force Research Laboratory have demonstrated an ‘AI toolbox’ designed to help users understand both the capabilities and the limitations of AI on the battlefield of the future.

The set of exercises, hosted by the international signatories of the Autonomy and Artificial Intelligence Collaboration (AAIC) Partnership Agreement, forms part of a five-year agreement aimed at accelerating joint UK-US development and sharing of AI technology and capabilities.

Challenge of making military AI agile, adaptable and trustworthy

First demonstrated in the Project Convergence 22 (PC22) experiment at the US National Training Center at Fort Irwin and then at the Salisbury Plain Training Area in Wiltshire, England, the exercises aimed to address the challenge of making AI agile, adaptable, trustworthy and accessible to the warfighter under different US and UK military use cases. This also included a process to ensure that the AI being developed and delivered is robust for the mission and that any limitations of the AI are understood by the user, both key steps in developing user trust in the technology.

The focus of the joint AI Taskforce for PC22 was to deploy, for the first time, the jointly developed UK-US AI Toolbox. The toolbox draws together data collected from UK-US uncrewed ground vehicles (UGVs) and uncrewed aerial vehicles (UAVs), data labelling, and rapid AI training and retraining on deployed tactical high-performance computers (HPCs) to deliver mission-specific AI.

“By deploying our AI Taskforce to PC22, we learned what this technology would mean to the warfighter and identified further challenges which require research and development to enhance a future operational capability,” commented UK AI Toolbox lead Todd Robinson. “It is important we deploy AI into trials more regularly to drive the maturation and operationalisation of AI.”

US AI Toolbox lead, Dr Lee Seversky, added: “It is becoming more and more critical to be able to adapt AI to meet changing mission requirements, operating environments, and accelerated decision timelines in-mission, all while ensuring it is trusted and understandable to the military users.

“The joint AI Toolbox, with its ability to adapt and deliver AI for different joint military missions, is critical. AI flexibility and speed is key to moving us towards this goal.”

The second deployment of the AI Toolbox was on UK platforms as part of the DSTL HYDRA project’s Integrated Concept Evaluation (ICE) trials on Salisbury Plain.

ICE4 demonstrated that UK-US developed algorithms from the AI Toolbox could be deployed onto a swarm of UK UAVs, retrained by the joint AI Taskforce at the ground station, and the model updated in flight, a first for the UK. This demonstrated how the AI Toolbox can adapt to new data sources, platforms and operating locations to rapidly update the AI deployed onto autonomous systems.

UK Autonomy Programme manager John Godsell said: “It has been a hugely exciting year for the UK-US collaboration. Firstly, being part of Project Convergence 2022, a US experiment at an epic scale, and then rapidly redeploying the team to Salisbury Plain in the UK just four weeks later as part of our experimentation campaign on the key technologies of AI and autonomy.

“These are rapidly emerging technologies that we must be able to understand and grasp to ensure that our warfighters have the tools they need to win on the battlefields of the future.”
