EU networks plan to build a foundation for trustworthy AI

Artificial intelligence technologies are in their infancy, but commercial stakeholders must build them on foundations of trust, say research experts

Research experts in the European Union are building a series of networks to ensure companies and consumers can trust the artificial intelligence technologies being developed in universities and labs across the continent.

In Sweden, Linköping University (LiU) is working on TAILOR – an acronym drawn from trustworthy AI through integrating learning, optimisation and reasoning – an EU project that has drawn up a research-based roadmap designed to guide research funding bodies and decision-makers toward the development of trustworthy artificial intelligence (AI).

TAILOR’s Strategic Research and Innovation Roadmap (SRIR) aims to boost research by clearly defining the major research challenges. The roadmap will be written by a Roadmap Editorial Board (REB) made up of volunteer partners. TAILOR is one of six networks set up by the European Union to strengthen research capacity and develop future AI.

“The development of artificial intelligence is in its infancy,” says Fredrik Heintz, Professor of Artificial Intelligence at LiU and coordinator of the TAILOR project. “When we look back in 50 years at what we are doing today, we will find it pretty primitive. In other words, most of the field remains to be discovered. That’s why it’s important to lay the foundation of trustworthy AI now.”

The researchers have defined three criteria for trustworthy AI: it must conform to laws and regulations, it must satisfy several ethical principles, and its implementation must be robust and safe. 

“Take justice, for example,” says Heintz. “Does this mean an equal distribution of resources or that all actors receive the resources needed to bring them all to the same level? We are facing major long-term questions, and it will take time before they are answered. Remember – the definition of justice has been debated by philosophers and scholars for hundreds of years.” 

Basic research into artificial intelligence must be a priority

The project will focus on comprehensive research questions and will attempt to establish standards that can be adopted by all researchers involved in AI. But Heintz is convinced that this can only be achieved if basic research into AI is given priority.

“People often regard AI as a technology issue, but what's really important is whether we gain societal benefit from it,” says Heintz. “If we are to obtain AI that can be trusted and that functions well in society, we must make sure that it is centred on people.”

Many of the legal proposals drawn up within the EU and its member states are written by legal specialists, says Heintz, but these specialists lack expert knowledge of artificial intelligence.

“Legislation and standards must be based on knowledge,” says Heintz. “This is where we researchers can contribute, providing information about the current forefront of research, and making well-grounded decisions possible. It’s important that experts have the opportunity to influence questions of this type.”
