Larry Lewis, CNA: leveraging AI to mitigate civilian harm

AI Magazine spoke to Larry Lewis, Director of the Centre for Autonomy and Artificial Intelligence at CNA, to learn about his work with AI and civilian harm

With a PhD in Physical Chemistry, Larry Lewis shifted from analysing data in a laboratory to drawing upon data from real-world military operations.

“The world is now my laboratory,” explained Lewis.

Throughout his career, Lewis has dedicated himself to combatting the ever-present issue of civilian casualties in warfare.

His keen interest in the power of data has drawn Lewis to study events in Iraq, Afghanistan and many other battlefields to see if he could support militaries as they come under growing pressure to reduce civilian casualties.

He continued: “I was drawn to going to the battlefield, for context, but also getting the data to challenge common assumptions, the way people think about civilian casualties. The question I always ask is, ‘what does the data say?’”

These false assumptions meant that the military repeatedly said it didn’t need help with the issue, even as rising levels of civilian casualties in multiple conflicts suggested otherwise.

The Joint Civilian Casualty Study

Despite repeated resistance, Lewis’ sheer determination meant he kept looking for opportunities to analyse data to better understand how civilian casualties happen and develop practical ways of reducing them. 

One such opportunity was the Joint Civilian Casualty Study, supporting US and international forces in Afghanistan in the face of increasing civilian casualties. He worked with Dr Sarah Sewall on this groundbreaking study.

“The study combined this data and the research foundation I had built. But we also travelled to Afghanistan. 

“General Petraeus sponsored the work and even flew me around Afghanistan in his jet. We met with forces all around the country to understand the challenges and the context. Data is invaluable, but you have to understand the context as well. General Petraeus called it the first comprehensive assessment on civilian harm and it was,” added Lewis.

Discussing his work in Afghanistan, Lewis said: “I gathered all the data on civilian harm incidents and analysed them, and I started seeing patterns. This gave us a clearer picture of how civilian harm happens, and it was fundamentally different from what the military thought; they had a misunderstanding about how civilian harm happened. They would put measures in place to fix it, but these wouldn’t work because they didn’t understand how it happened in the first place.”

The military started acting on his evidence-based recommendations and civilian casualties decreased as a result.

“Following this, the State Department called as they wanted my expertise on civilian casualties to inform US policy. To do this, I joined the State Department as a senior advisor for civilian harm. I drafted an executive order on civilian casualties that President Obama signed, making national commitments to better protect civilians,” explained Lewis.

Utilising AI in a pioneering way

Lewis was also part of the US delegation to the UN talks on autonomous weapons, intended to increase understanding of autonomous technology and address concerns around the technology in war. While the primary focus has been on understanding and mitigating risks, in his most recent work Lewis highlights the opportunities for technology to better protect civilians and gives specific examples that militaries can start with for using AI to spare civilians in war. 

The Director of the Centre for Autonomy and Artificial Intelligence at CNA did highlight the wins he has had in his career: “There have been some victories, such as helping to contribute to reduced civilian casualties in the field, the Executive Order on civilian casualties, and the US AI strategy that was published in 2018. The strategy says the US will use AI to help reduce civilian casualties. That’s a sign of greater awareness that we can better protect civilians with technology.”

He added: “There's really no one else that's done this work of analysing civilian casualties. I don't understand it, honestly. But, my work shows that there are practical ways to better protect civilians from harm. We can do better.”

CNA: AI brings us a step closer to mitigating harm

Recently, Lewis and CNA released a summary of their ‘Leveraging AI To Mitigate Civilian Harm’ report. This report highlights the first key step in answering two questions:

  • How can we use AI to protect civilians from harm?
  • How can AI be used to lessen the infliction of suffering, injury, and destruction of war?

Although many concerns have been voiced around the implementation of AI, including bias and the desire to maintain human judgement, the report outlines how to implement this technology to successfully mitigate risk.

It says: “Finding linkages between the risk factors we have observed in real-world operations and specific potential applications of AI brings us a step closer to mitigating harm.”

AI can undoubtedly be a key tool in reducing civilian harm; however, the report does stress that no solution will completely eliminate the problem.

Military operations will always have a non-zero risk to civilians, but through this research, CNA shows that AI can be used to help address patterns of harm and reduce its likelihood.

To read the full summary of the report, click here.
