UK intelligence agency GCHQ sets out AI strategy and ethics

By William Smith
GCHQ has published a paper, “Ethics of AI: Pioneering a New National Security”, setting out examples of how it could use AI...


British intelligence agency GCHQ has laid out its plans for the use of artificial intelligence in national security.

GCHQ is the UK’s signals intelligence agency, responsible for gathering information as well as securing UK communications.

The organisation has published a paper, titled “Ethics of AI: Pioneering a New National Security”, in which it sets out examples of how it could use AI going forward.

AI in national security

Potential uses include fact-checking and the detection of deepfake media, which has been mooted as a threat to democracy, as well as mapping international trafficking networks, analysing chat rooms for evidence of child grooming, and identifying potentially malicious software for cybersecurity purposes.

GCHQ Director Jeremy Fleming said: “AI, like so many technologies, offers great promise for society, prosperity and security. Its impact on GCHQ is equally profound. AI is already invaluable in many of our missions as we protect the country, its people and way of life. It allows our brilliant analysts to manage vast volumes of complex data and improves decision-making in the face of increasingly complex threats – from protecting children to improving cyber security.”

An eye on ethics

The paper also sets out how the organisation intends to use AI ethically, fairly and transparently. It highlights GCHQ’s wider support for the UK’s AI sector, including an AI lab in its Manchester office, mentoring AI startups through accelerator schemes, and its role in supporting the creation of the Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, in 2015.

“While this unprecedented technological evolution comes with great opportunity, it also poses significant ethical challenges for all of society, including GCHQ,” said Fleming. “Today we are setting out our plan and commitment to the ethical use of AI in our mission. I hope it will inspire further thinking at home and abroad about how we can ensure fairness, transparency and accountability to underpin the use of AI.”
